Jan 20 18:05:42 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 20 18:05:42 crc restorecon[4644]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 18:05:42 crc restorecon[4644]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 18:05:42 crc restorecon[4644]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 18:05:42 crc restorecon[4644]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 18:05:42 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 
18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 18:05:43 crc 
restorecon[4644]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 
18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 
18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc 
restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 18:05:43 crc restorecon[4644]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 18:05:43 crc restorecon[4644]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 20 18:05:43 crc kubenswrapper[4661]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 18:05:43 crc kubenswrapper[4661]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 20 18:05:43 crc kubenswrapper[4661]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 18:05:43 crc kubenswrapper[4661]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
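The restorecon pass above reports many paths under /var/lib/kubelet whose SELinux labels were left in place ("not reset as customized by admin") rather than reset to the policy default; container_file_t is typically listed as a customizable type in the policy, which is why the per-pod labels with their MCS categories (for example c7,c13) are preserved. If you want to see the label restorecon is looking at on one of these paths, a minimal sketch (assuming Python 3 on the node and that the paths exist) is to read each path's security.selinux extended attribute:

```python
import os

# Illustrative paths taken from the log above; adjust to the node being inspected.
PATHS = [
    "/var/lib/kubelet/plugins/csi-hostpath/csi.sock",
    "/var/lib/kubelet/plugins_registry",
]

def selinux_context(path: str) -> str:
    """Return the SELinux label stored in the security.selinux xattr.

    The kernel stores the label as a (possibly NUL-terminated) string such as
    'system_u:object_r:container_file_t:s0:c7,c13'.
    """
    raw = os.getxattr(path, "security.selinux")
    return raw.rstrip(b"\x00").decode()

if __name__ == "__main__":
    for p in PATHS:
        try:
            print(f"{p}\t{selinux_context(p)}")
        except OSError as exc:
            print(f"{p}\t<error: {exc}>")
```

This is the same label that tools such as ls -Z display; comparing it against the context quoted in the log line shows whether the file still carries the per-pod categories the runtime assigned.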
Jan 20 18:05:43 crc kubenswrapper[4661]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 20 18:05:43 crc kubenswrapper[4661]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.958854 4661 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965522 4661 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965555 4661 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965565 4661 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965574 4661 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965584 4661 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965592 4661 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965600 4661 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965612 4661 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965623 4661 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965633 4661 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965643 4661 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965652 4661 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965660 4661 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965695 4661 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965706 4661 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965715 4661 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965724 4661 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965732 4661 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965742 4661 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
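The kubelet warnings above flag several command-line options (--container-runtime-endpoint, --volume-plugin-dir, --register-with-taints, --system-reserved, and --minimum-container-ttl-duration via the suggested eviction flags) as deprecated in favour of the file passed through --config; --pod-infra-container-image is the exception, since the log notes the sandbox image will be taken from CRI instead. Below is a minimal sketch of moving those settings into a KubeletConfiguration file. The socket path, taint, and reservation values are illustrative assumptions, and field names should be verified against the kubelet.config.k8s.io/v1beta1 reference for the kubelet version in use:

```python
import json

# Hypothetical sketch: the settings the deprecated flags above carry, expressed
# as a KubeletConfiguration document for the kubelet's --config flag.
kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces --container-runtime-endpoint (socket path is an assumed example)
    "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
    # replaces --volume-plugin-dir
    "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
    # replaces --register-with-taints (illustrative taint)
    "registerWithTaints": [
        {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
    ],
    # replaces --system-reserved (illustrative reservations)
    "systemReserved": {"cpu": "500m", "memory": "1Gi"},
    # eviction thresholds, the replacement the log suggests for
    # --minimum-container-ttl-duration's garbage-collection role
    "evictionHard": {"memory.available": "100Mi"},
}

# JSON is a subset of YAML, so this file can be handed to --config as-is.
with open("kubelet-config.json", "w") as f:
    json.dump(kubelet_config, f, indent=2)
```

The long run of "unrecognized feature gate" warnings that follows appears to be cluster-level OpenShift gates being passed through to the kubelet; it logs a warning for each gate it does not know and continues starting, and the separate notes about GA or deprecated gates (ValidatingAdmissionPolicy, KMSv1, DisableKubeletCloudCredentialProviders) only signal that those settings will stop having an effect in a future release.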
Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965753 4661 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965762 4661 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965771 4661 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965779 4661 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965788 4661 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965798 4661 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965806 4661 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965814 4661 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965829 4661 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965837 4661 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965845 4661 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965854 4661 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965861 4661 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965869 4661 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965877 4661 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965885 4661 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965893 4661 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965903 4661 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965910 4661 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965918 4661 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965926 4661 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965934 4661 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965942 4661 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965950 4661 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965957 4661 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965966 4661 feature_gate.go:330] 
unrecognized feature gate: VSphereMultiNetworks Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965974 4661 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965982 4661 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965991 4661 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.965999 4661 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966007 4661 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966014 4661 feature_gate.go:330] unrecognized feature gate: Example Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966025 4661 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966037 4661 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966046 4661 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966054 4661 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966063 4661 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966071 4661 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966080 4661 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966089 4661 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966097 4661 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966105 4661 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966114 4661 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966124 4661 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966133 4661 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966141 4661 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966149 4661 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966157 4661 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966165 4661 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966172 4661 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.966180 4661 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 
18:05:43.966187 4661 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.966843 4661 flags.go:64] FLAG: --address="0.0.0.0" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.966867 4661 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.966884 4661 flags.go:64] FLAG: --anonymous-auth="true" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.966896 4661 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.966908 4661 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.966917 4661 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.966930 4661 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.966942 4661 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.966952 4661 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.966961 4661 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.966971 4661 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.966984 4661 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.966996 4661 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967009 4661 flags.go:64] FLAG: --cgroup-root="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967020 4661 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967032 4661 flags.go:64] FLAG: --client-ca-file="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967043 4661 flags.go:64] FLAG: --cloud-config="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967059 4661 flags.go:64] FLAG: --cloud-provider="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967070 4661 flags.go:64] FLAG: --cluster-dns="[]" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967087 4661 flags.go:64] FLAG: --cluster-domain="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967098 4661 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967110 4661 flags.go:64] FLAG: --config-dir="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967120 4661 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967129 4661 flags.go:64] FLAG: --container-log-max-files="5" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967141 4661 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967150 4661 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967160 4661 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967170 4661 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967179 4661 flags.go:64] 
FLAG: --contention-profiling="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967227 4661 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967237 4661 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967246 4661 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967256 4661 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967268 4661 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967277 4661 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967286 4661 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967297 4661 flags.go:64] FLAG: --enable-load-reader="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967307 4661 flags.go:64] FLAG: --enable-server="true" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967316 4661 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967327 4661 flags.go:64] FLAG: --event-burst="100" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967338 4661 flags.go:64] FLAG: --event-qps="50" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967347 4661 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967357 4661 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967368 4661 flags.go:64] FLAG: --eviction-hard="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967396 4661 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967406 4661 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967416 4661 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967426 4661 flags.go:64] FLAG: --eviction-soft="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967437 4661 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967447 4661 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967457 4661 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967467 4661 flags.go:64] FLAG: --experimental-mounter-path="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967478 4661 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967488 4661 flags.go:64] FLAG: --fail-swap-on="true" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967497 4661 flags.go:64] FLAG: --feature-gates="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967508 4661 flags.go:64] FLAG: --file-check-frequency="20s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967518 4661 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967527 4661 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967537 4661 flags.go:64] FLAG: 
--healthz-bind-address="127.0.0.1" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967546 4661 flags.go:64] FLAG: --healthz-port="10248" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967555 4661 flags.go:64] FLAG: --help="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967565 4661 flags.go:64] FLAG: --hostname-override="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967574 4661 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967584 4661 flags.go:64] FLAG: --http-check-frequency="20s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967593 4661 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967602 4661 flags.go:64] FLAG: --image-credential-provider-config="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967611 4661 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967620 4661 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967629 4661 flags.go:64] FLAG: --image-service-endpoint="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967638 4661 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967647 4661 flags.go:64] FLAG: --kube-api-burst="100" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967656 4661 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967693 4661 flags.go:64] FLAG: --kube-api-qps="50" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967702 4661 flags.go:64] FLAG: --kube-reserved="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967712 4661 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967721 4661 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967730 4661 flags.go:64] FLAG: --kubelet-cgroups="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967739 4661 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967748 4661 flags.go:64] FLAG: --lock-file="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967759 4661 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967769 4661 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967779 4661 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967792 4661 flags.go:64] FLAG: --log-json-split-stream="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967801 4661 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967811 4661 flags.go:64] FLAG: --log-text-split-stream="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967821 4661 flags.go:64] FLAG: --logging-format="text" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967829 4661 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967839 4661 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967848 4661 flags.go:64] FLAG: 
--manifest-url="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967857 4661 flags.go:64] FLAG: --manifest-url-header="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967869 4661 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967879 4661 flags.go:64] FLAG: --max-open-files="1000000" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967890 4661 flags.go:64] FLAG: --max-pods="110" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967899 4661 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967908 4661 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967918 4661 flags.go:64] FLAG: --memory-manager-policy="None" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967928 4661 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967937 4661 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967946 4661 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967956 4661 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967976 4661 flags.go:64] FLAG: --node-status-max-images="50" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967985 4661 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.967994 4661 flags.go:64] FLAG: --oom-score-adj="-999" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968005 4661 flags.go:64] FLAG: --pod-cidr="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968014 4661 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968030 4661 flags.go:64] FLAG: --pod-manifest-path="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968039 4661 flags.go:64] FLAG: --pod-max-pids="-1" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968049 4661 flags.go:64] FLAG: --pods-per-core="0" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968058 4661 flags.go:64] FLAG: --port="10250" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968068 4661 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968077 4661 flags.go:64] FLAG: --provider-id="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968085 4661 flags.go:64] FLAG: --qos-reserved="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968094 4661 flags.go:64] FLAG: --read-only-port="10255" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968104 4661 flags.go:64] FLAG: --register-node="true" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968115 4661 flags.go:64] FLAG: --register-schedulable="true" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968129 4661 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968151 4661 flags.go:64] FLAG: --registry-burst="10" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968163 4661 flags.go:64] FLAG: 
--registry-qps="5" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968174 4661 flags.go:64] FLAG: --reserved-cpus="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968187 4661 flags.go:64] FLAG: --reserved-memory="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968203 4661 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968215 4661 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968226 4661 flags.go:64] FLAG: --rotate-certificates="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968239 4661 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968251 4661 flags.go:64] FLAG: --runonce="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968262 4661 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968274 4661 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968286 4661 flags.go:64] FLAG: --seccomp-default="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968298 4661 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968308 4661 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968318 4661 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968327 4661 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968337 4661 flags.go:64] FLAG: --storage-driver-password="root" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968346 4661 flags.go:64] FLAG: --storage-driver-secure="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968355 4661 flags.go:64] FLAG: --storage-driver-table="stats" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968364 4661 flags.go:64] FLAG: --storage-driver-user="root" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968373 4661 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968383 4661 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968392 4661 flags.go:64] FLAG: --system-cgroups="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968401 4661 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968417 4661 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968427 4661 flags.go:64] FLAG: --tls-cert-file="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968436 4661 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968451 4661 flags.go:64] FLAG: --tls-min-version="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968460 4661 flags.go:64] FLAG: --tls-private-key-file="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968471 4661 flags.go:64] FLAG: --topology-manager-policy="none" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968480 4661 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968489 4661 flags.go:64] FLAG: 
--topology-manager-scope="container" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968499 4661 flags.go:64] FLAG: --v="2" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968511 4661 flags.go:64] FLAG: --version="false" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968522 4661 flags.go:64] FLAG: --vmodule="" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968535 4661 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.968545 4661 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968812 4661 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968825 4661 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968835 4661 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968843 4661 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968852 4661 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968859 4661 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968867 4661 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968876 4661 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968884 4661 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968892 4661 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968899 4661 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968907 4661 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968917 4661 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968926 4661 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968936 4661 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968946 4661 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968956 4661 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968966 4661 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968975 4661 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968985 4661 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.968994 4661 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969004 4661 
feature_gate.go:330] unrecognized feature gate: Example Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969013 4661 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969025 4661 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969035 4661 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969045 4661 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969056 4661 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969065 4661 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969075 4661 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969085 4661 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969095 4661 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969104 4661 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969113 4661 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969121 4661 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969131 4661 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969140 4661 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969148 4661 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969156 4661 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969164 4661 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969173 4661 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969181 4661 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969190 4661 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969198 4661 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969206 4661 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969214 4661 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969222 4661 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969230 4661 feature_gate.go:330] unrecognized feature 
gate: AWSEFSDriverVolumeMetrics Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969238 4661 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969247 4661 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969254 4661 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969406 4661 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969417 4661 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969435 4661 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969448 4661 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969459 4661 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969468 4661 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969477 4661 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969485 4661 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969494 4661 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969502 4661 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969510 4661 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969519 4661 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969527 4661 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969536 4661 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969546 4661 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969554 4661 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969564 4661 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969572 4661 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969580 4661 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969588 4661 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.969596 4661 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.969886 4661 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.978888 4661 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.979322 4661 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979409 4661 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979436 4661 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979445 4661 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979451 4661 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979458 4661 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979466 4661 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979473 4661 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979478 4661 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979484 4661 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979489 4661 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979494 4661 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979500 4661 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979505 4661 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979509 4661 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979514 4661 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979519 4661 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979525 4661 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979530 4661 feature_gate.go:330] unrecognized feature gate: Example Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979535 4661 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979540 4661 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979544 4661 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979550 4661 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979555 4661 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979560 4661 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979565 4661 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979570 4661 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979576 4661 feature_gate.go:330] unrecognized 
feature gate: HardwareSpeed Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979581 4661 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979586 4661 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979590 4661 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979597 4661 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979606 4661 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979613 4661 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979620 4661 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979628 4661 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979635 4661 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979641 4661 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979648 4661 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979655 4661 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979661 4661 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979693 4661 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979701 4661 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979708 4661 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979713 4661 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979718 4661 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979723 4661 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979728 4661 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979734 4661 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979739 4661 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979746 4661 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979752 4661 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979757 4661 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979763 4661 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979769 4661 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979774 4661 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979779 4661 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979785 4661 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979790 4661 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979796 4661 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979800 4661 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979805 4661 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979810 4661 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979815 4661 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979820 4661 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979825 4661 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979829 4661 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979834 4661 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979839 4661 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979844 4661 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979849 4661 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.979854 4661 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.979863 4661 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980017 4661 feature_gate.go:330] unrecognized feature gate: 
NutanixMultiSubnets Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980025 4661 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980031 4661 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980036 4661 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980041 4661 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980046 4661 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980051 4661 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980056 4661 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980060 4661 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980066 4661 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980073 4661 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980079 4661 feature_gate.go:330] unrecognized feature gate: Example Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980084 4661 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980089 4661 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980093 4661 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980098 4661 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980103 4661 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980108 4661 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980113 4661 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980120 4661 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980125 4661 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980131 4661 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980137 4661 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980143 4661 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980156 4661 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980166 4661 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980173 4661 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 18:05:43 crc 
kubenswrapper[4661]: W0120 18:05:43.980181 4661 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980187 4661 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980194 4661 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980200 4661 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980207 4661 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980213 4661 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980219 4661 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980228 4661 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980235 4661 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980241 4661 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980247 4661 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980255 4661 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980262 4661 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980270 4661 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980279 4661 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980287 4661 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980294 4661 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980301 4661 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980307 4661 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980313 4661 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980319 4661 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980326 4661 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980333 4661 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980341 4661 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980347 4661 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980354 4661 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980359 4661 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980365 4661 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980370 4661 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980375 4661 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980380 4661 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980385 4661 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980390 4661 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980395 4661 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980400 4661 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980405 4661 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980410 4661 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980415 4661 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980420 4661 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980425 4661 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980430 4661 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980435 4661 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980440 4661 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 18:05:43 crc kubenswrapper[4661]: W0120 18:05:43.980453 4661 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.980462 4661 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.980704 4661 server.go:940] "Client rotation is on, will bootstrap in background" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.983979 4661 bootstrap.go:85] "Current 
kubeconfig file contents are still valid, no bootstrap necessary" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.984101 4661 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.989546 4661 server.go:997] "Starting client certificate rotation" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.989582 4661 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.989807 4661 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-17 04:56:03.378739605 +0000 UTC Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.989967 4661 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.994845 4661 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 18:05:43 crc kubenswrapper[4661]: I0120 18:05:43.996577 4661 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 18:05:43 crc kubenswrapper[4661]: E0120 18:05:43.996656 4661 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.009977 4661 log.go:25] "Validated CRI v1 runtime API" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.031252 4661 log.go:25] "Validated CRI v1 image API" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.032944 4661 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.035487 4661 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-20-18-00-06-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.035521 4661 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.045772 4661 manager.go:217] Machine: {Timestamp:2026-01-20 18:05:44.044838645 +0000 UTC m=+0.375628327 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec 
SystemUUID:727045d4-7edb-4891-a9ee-dd5ccba890df BootID:7f2069d5-53e0-4198-b42b-b73aa1252865 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5a:d4:79 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5a:d4:79 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:c7:33:e2 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:bc:93:15 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:68:a8:be Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3c:9c:7e Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:ab:8e:6b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ae:a8:13:d7:17:a0 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:da:02:9c:96:b1:4f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 
Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.045987 4661 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.046122 4661 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.046604 4661 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.046818 4661 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.046852 4661 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.047036 4661 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.047049 4661 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.047280 4661 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.047316 4661 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.047563 4661 state_mem.go:36] "Initialized new in-memory state store" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.047664 4661 
server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.048237 4661 kubelet.go:418] "Attempting to sync node with API server" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.048256 4661 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.048281 4661 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.048296 4661 kubelet.go:324] "Adding apiserver pod source" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.048307 4661 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.051746 4661 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.052237 4661 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.053161 4661 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.053836 4661 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.053868 4661 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.053878 4661 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.053888 4661 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.053903 4661 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.053912 4661 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.053922 4661 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.053937 4661 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.053949 4661 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.053959 4661 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.053974 4661 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.053983 4661 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.054217 4661 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.054818 4661 server.go:1280] "Started kubelet" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.056159 4661 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.055689 4661 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 18:05:44 crc 
systemd[1]: Started Kubernetes Kubelet. Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.058286 4661 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.064481 4661 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Jan 20 18:05:44 crc kubenswrapper[4661]: W0120 18:05:44.064777 4661 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Jan 20 18:05:44 crc kubenswrapper[4661]: W0120 18:05:44.065467 4661 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.064926 4661 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.065560 4661 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.075612 4661 server.go:460] "Adding debug handlers to kubelet server" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.076412 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.076463 4661 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.076934 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:03:54.70692944 +0000 UTC Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.077263 4661 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.077342 4661 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.077357 4661 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.077551 4661 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.077839 4661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" 
interval="200ms" Jan 20 18:05:44 crc kubenswrapper[4661]: W0120 18:05:44.078237 4661 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.078301 4661 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.087945 4661 factory.go:55] Registering systemd factory Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.088230 4661 factory.go:221] Registration of the systemd container factory successfully Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.089825 4661 factory.go:153] Registering CRI-O factory Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.089857 4661 factory.go:221] Registration of the crio container factory successfully Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.089928 4661 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.089951 4661 factory.go:103] Registering Raw factory Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.089969 4661 manager.go:1196] Started watching for new ooms in manager Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.091936 4661 manager.go:319] Starting recovery of all containers Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092023 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092097 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092117 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092132 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092145 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 20 
18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092157 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092169 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092182 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092198 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092211 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092224 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092236 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092252 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092267 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092302 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092319 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" 
seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092334 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092347 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092359 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092375 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092386 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092398 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092407 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092420 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092436 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092451 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092484 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 20 18:05:44 crc 
kubenswrapper[4661]: I0120 18:05:44.092497 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092508 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092520 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092540 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092569 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092583 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092599 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092612 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092623 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092654 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092684 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 20 18:05:44 crc 
kubenswrapper[4661]: I0120 18:05:44.092699 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092709 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092720 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092731 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092742 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092752 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092762 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092780 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092789 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092801 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092810 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092823 4661 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092833 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092843 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092855 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092867 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092895 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092907 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092916 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092923 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092933 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092943 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092953 4661 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092961 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092973 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092983 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.092992 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093004 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093013 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093022 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093032 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093042 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093052 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093063 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093075 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093088 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093101 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093113 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093127 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093138 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093148 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093157 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093168 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093180 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093189 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093198 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093207 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093217 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093228 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093239 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093247 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093256 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093265 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093273 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093281 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093289 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093297 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093308 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093317 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093331 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093340 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093350 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093359 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093369 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093378 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093387 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093402 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093414 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093423 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093434 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093446 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093461 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093473 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093484 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093494 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093514 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093525 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093536 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093548 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093573 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093583 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093593 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093602 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093612 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093621 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093631 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093643 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093651 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093659 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093682 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093691 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093702 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093711 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093722 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093732 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093742 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093751 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093760 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093769 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093780 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093789 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093798 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093806 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093817 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093970 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093980 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.093991 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.094000 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.094009 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.094017 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.094026 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.094036 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.094044 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.094053 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.094063 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.095038 4661 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c82983f1542c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 18:05:44.054760133 +0000 UTC m=+0.385549805,LastTimestamp:2026-01-20 18:05:44.054760133 +0000 UTC m=+0.385549805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096200 4661 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096250 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096279 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096294 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096308 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096328 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096347 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096363 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096379 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096399 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096412 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096438 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096453 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096469 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096485 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096500 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096524 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096541 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096559 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096580 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096593 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096609 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096624 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096641 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096657 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096695 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096709 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096728 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096750 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096765 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096779 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096793 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096815 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096832 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096853 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096869 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096883 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096896 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096910 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096925 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096941 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096962 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096978 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.096992 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.097007 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.097025 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.097041 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.097054 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.097074 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.097088 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.097101 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.097120 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.097142 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.097162 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.097179 4661 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.097191 4661 reconstruct.go:97] "Volume reconstruction finished" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.097200 4661 reconciler.go:26] "Reconciler: start to sync state" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.107549 4661 manager.go:324] Recovery completed Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.115938 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.118877 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.118912 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.118921 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.120699 4661 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 20 18:05:44 crc 
kubenswrapper[4661]: I0120 18:05:44.120722 4661 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.120748 4661 state_mem.go:36] "Initialized new in-memory state store" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.129974 4661 policy_none.go:49] "None policy: Start" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.131052 4661 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.131085 4661 state_mem.go:35] "Initializing new in-memory state store" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.137620 4661 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.139555 4661 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.139714 4661 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.139820 4661 kubelet.go:2335] "Starting kubelet main sync loop" Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.140886 4661 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 18:05:44 crc kubenswrapper[4661]: W0120 18:05:44.140690 4661 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.141043 4661 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.178213 4661 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.197765 4661 manager.go:334] "Starting Device Plugin manager" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.197817 4661 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.197849 4661 server.go:79] "Starting device plugin registration server" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.199063 4661 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.199193 4661 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.199365 4661 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.199459 4661 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.199466 4661 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.206915 4661 eviction_manager.go:285] "Eviction manager: 
failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.241410 4661 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.241530 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.242769 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.242819 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.242831 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.242992 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.243287 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.243607 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.244169 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.244279 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.244360 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.244577 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.244707 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.244735 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.244603 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.244893 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.244904 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.245447 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.245466 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.245477 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.246732 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.246849 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.246926 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.247115 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.247208 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.247252 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.248600 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.248737 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.248842 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.249065 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.249779 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.249878 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.249908 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.249916 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.250053 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.250110 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.250191 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.250201 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.250437 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.250768 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.251706 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.251727 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.251735 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.252742 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.252851 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.252928 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.278633 4661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="400ms" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.299633 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.301022 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 18:05:44 
crc kubenswrapper[4661]: I0120 18:05:44.301261 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.301428 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.301576 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.301896 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.302016 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.302111 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.302328 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.302563 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.301914 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.302773 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.302820 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.302836 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.302869 4661 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.302949 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.303018 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.303068 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.303111 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.303148 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.303469 4661 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.404948 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405036 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405071 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405138 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405175 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405187 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405237 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405186 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405257 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405281 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405210 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405187 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405350 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405370 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405387 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405404 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405419 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405434 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405449 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405465 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405485 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405485 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405535 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405576 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405622 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405698 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405752 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405815 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405819 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.405853 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.504253 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.505988 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.506166 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.506337 
4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.506577 4661 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.507320 4661 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.564534 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.570510 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: W0120 18:05:44.586636 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4520acc14bbf04e61d76538d3bc6bd8e5c0379caa25dbae4d57ac480a37d4f02 WatchSource:0}: Error finding container 4520acc14bbf04e61d76538d3bc6bd8e5c0379caa25dbae4d57ac480a37d4f02: Status 404 returned error can't find the container with id 4520acc14bbf04e61d76538d3bc6bd8e5c0379caa25dbae4d57ac480a37d4f02 Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.589953 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: W0120 18:05:44.590112 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-39adb54f27629da5e09596c5a679bae39c37d38706a3ff07327778ce4ffcc4f7 WatchSource:0}: Error finding container 39adb54f27629da5e09596c5a679bae39c37d38706a3ff07327778ce4ffcc4f7: Status 404 returned error can't find the container with id 39adb54f27629da5e09596c5a679bae39c37d38706a3ff07327778ce4ffcc4f7 Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.597756 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.600841 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:44 crc kubenswrapper[4661]: W0120 18:05:44.618396 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-267fdb2059f6ed6a68f7bfc40298206d0e75b053ad664fe0735602d93aa53473 WatchSource:0}: Error finding container 267fdb2059f6ed6a68f7bfc40298206d0e75b053ad664fe0735602d93aa53473: Status 404 returned error can't find the container with id 267fdb2059f6ed6a68f7bfc40298206d0e75b053ad664fe0735602d93aa53473 Jan 20 18:05:44 crc kubenswrapper[4661]: W0120 18:05:44.623315 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3138dbd6267b93d0b72e50396445206b313a9dbe97664fb27f95508cdd2c83fe WatchSource:0}: Error finding container 3138dbd6267b93d0b72e50396445206b313a9dbe97664fb27f95508cdd2c83fe: Status 404 returned error can't find the container with id 3138dbd6267b93d0b72e50396445206b313a9dbe97664fb27f95508cdd2c83fe Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.679789 4661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="800ms" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.907760 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.914700 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.914739 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.914748 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:44 crc kubenswrapper[4661]: I0120 18:05:44.914774 4661 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.915003 4661 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Jan 20 18:05:44 crc kubenswrapper[4661]: W0120 18:05:44.981774 4661 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Jan 20 18:05:44 crc kubenswrapper[4661]: E0120 18:05:44.981859 4661 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.065359 4661 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 
38.102.83.36:6443: connect: connection refused Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.077418 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:53:46.555058582 +0000 UTC Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.145525 4661 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538" exitCode=0 Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.145618 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538"} Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.145750 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3138dbd6267b93d0b72e50396445206b313a9dbe97664fb27f95508cdd2c83fe"} Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.145944 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.147415 4661 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="550c15750153bb1329cc16b3a5cf51be47a7464b3f9840ba02397ffa86a80065" exitCode=0 Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.147433 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"550c15750153bb1329cc16b3a5cf51be47a7464b3f9840ba02397ffa86a80065"} Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.147457 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"267fdb2059f6ed6a68f7bfc40298206d0e75b053ad664fe0735602d93aa53473"} Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.147552 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.148247 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.148268 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.148275 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.151383 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.151499 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.151550 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.152392 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 
18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.154534 4661 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62" exitCode=0 Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.154657 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62"} Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.154708 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"135b58d8d4ccea2e71439f9ef02241129f3c203850e7130809c34733796feb3a"} Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.154809 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.155572 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.155618 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.155630 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.156099 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.156125 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.156136 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.157900 4661 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="470d14940e84e825c26cb01f9310af3ebbbc2107623e2237d96b40e918def207" exitCode=0 Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.157971 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"470d14940e84e825c26cb01f9310af3ebbbc2107623e2237d96b40e918def207"} Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.158009 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"39adb54f27629da5e09596c5a679bae39c37d38706a3ff07327778ce4ffcc4f7"} Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.158098 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.159615 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.159772 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:45 crc kubenswrapper[4661]: 
I0120 18:05:45.159889 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.160947 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e"} Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.160988 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4520acc14bbf04e61d76538d3bc6bd8e5c0379caa25dbae4d57ac480a37d4f02"} Jan 20 18:05:45 crc kubenswrapper[4661]: W0120 18:05:45.418282 4661 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Jan 20 18:05:45 crc kubenswrapper[4661]: E0120 18:05:45.418598 4661 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Jan 20 18:05:45 crc kubenswrapper[4661]: W0120 18:05:45.456214 4661 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Jan 20 18:05:45 crc kubenswrapper[4661]: E0120 18:05:45.456290 4661 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Jan 20 18:05:45 crc kubenswrapper[4661]: E0120 18:05:45.481283 4661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="1.6s" Jan 20 18:05:45 crc kubenswrapper[4661]: W0120 18:05:45.617548 4661 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Jan 20 18:05:45 crc kubenswrapper[4661]: E0120 18:05:45.617639 4661 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.715928 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.717415 4661 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.717476 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.717490 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:45 crc kubenswrapper[4661]: I0120 18:05:45.717528 4661 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 18:05:45 crc kubenswrapper[4661]: E0120 18:05:45.718217 4661 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.039598 4661 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 20 18:05:46 crc kubenswrapper[4661]: E0120 18:05:46.040692 4661 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.065189 4661 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.079333 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:07:51.315755602 +0000 UTC Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.169076 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20"} Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.169118 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df"} Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.169129 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a"} Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.169139 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251"} Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.171578 4661 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8dd258010361f9c226562aa4ec5c38cce4a95ab98dad3511c5c5f3f63e909d27" exitCode=0 Jan 20 18:05:46 crc 
kubenswrapper[4661]: I0120 18:05:46.171620 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8dd258010361f9c226562aa4ec5c38cce4a95ab98dad3511c5c5f3f63e909d27"} Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.171750 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.179618 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.179956 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.179974 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.183961 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f"} Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.184016 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a"} Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.184032 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8"} Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.184145 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.185210 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.185247 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.185259 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.188246 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3e81ace2939c908bb5c186943729767d4a14f0f0f12fc09c3e351774ca38dc47"} Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.188384 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.192340 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.192371 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.192383 4661 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.199222 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3"} Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.199265 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1"} Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.199290 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6"} Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.199321 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.200639 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.200706 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.200726 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:46 crc kubenswrapper[4661]: I0120 18:05:46.852717 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.080082 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:22:29.409899495 +0000 UTC Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.204880 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.204880 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1"} Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.205848 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.205883 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.205895 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.206878 4661 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2ed1f50477b370ce196e48a41c08bb2818ebf4f4994e608662e943ac6f0ccbed" exitCode=0 Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.206939 4661 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2ed1f50477b370ce196e48a41c08bb2818ebf4f4994e608662e943ac6f0ccbed"} Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.207040 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.207037 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.207812 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.207836 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.207848 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.208812 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.208849 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.208860 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.318708 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.319890 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.319918 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.319927 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.319953 4661 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 18:05:47 crc kubenswrapper[4661]: I0120 18:05:47.442215 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.081220 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:08:26.221313702 +0000 UTC Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.218108 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.218568 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"433b8b108c2b2f7469e71a9e13101c63e3c304e2de1798f6ebea256576d18aa1"} Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.218603 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eda453cf459078a75b837ae4b66e9a6e6d7fa0563d430b6c95fdac724698649c"} 
Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.218612 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2638d17f4684dcf6f074c181308e286774173811d7f1c1b72fda18e1fd45b119"} Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.218621 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f52492e1e5793ecb9a543b9bbd7b00bfcdd2cbb25a98a8bbd9fe449025b08abc"} Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.218727 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.219510 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.219613 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.219626 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.220299 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.220339 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.220352 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:48 crc kubenswrapper[4661]: I0120 18:05:48.335160 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.081631 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:46:07.667537975 +0000 UTC Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.088145 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.227182 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"222ae69d3e4ca6156097290820872d1deb363aeaab2c274facf6248753a23988"} Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.227268 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.227268 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.228659 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.228707 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.228716 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 
18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.229084 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.229124 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.229140 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.473628 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.473891 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.478851 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.478920 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:49 crc kubenswrapper[4661]: I0120 18:05:49.478939 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.082074 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:56:54.490568888 +0000 UTC Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.084189 4661 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.230176 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.230176 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.231186 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.231222 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.231233 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.231434 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.231465 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.231476 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.858003 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.858488 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:50 crc 
kubenswrapper[4661]: I0120 18:05:50.860148 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.860178 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:50 crc kubenswrapper[4661]: I0120 18:05:50.860191 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:51 crc kubenswrapper[4661]: I0120 18:05:51.082227 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:35:45.346491008 +0000 UTC Jan 20 18:05:51 crc kubenswrapper[4661]: I0120 18:05:51.865853 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 20 18:05:51 crc kubenswrapper[4661]: I0120 18:05:51.866040 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:51 crc kubenswrapper[4661]: I0120 18:05:51.867180 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:51 crc kubenswrapper[4661]: I0120 18:05:51.867423 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:51 crc kubenswrapper[4661]: I0120 18:05:51.867562 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:52 crc kubenswrapper[4661]: I0120 18:05:52.082996 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:35:39.722960415 +0000 UTC Jan 20 18:05:52 crc kubenswrapper[4661]: I0120 18:05:52.473926 4661 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 18:05:52 crc kubenswrapper[4661]: I0120 18:05:52.474091 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 18:05:53 crc kubenswrapper[4661]: I0120 18:05:53.084508 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:34:34.011580276 +0000 UTC Jan 20 18:05:53 crc kubenswrapper[4661]: I0120 18:05:53.679358 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 20 18:05:53 crc kubenswrapper[4661]: I0120 18:05:53.679547 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:53 crc kubenswrapper[4661]: I0120 18:05:53.681026 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:53 crc kubenswrapper[4661]: I0120 18:05:53.681265 4661 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:53 crc kubenswrapper[4661]: I0120 18:05:53.681323 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:54 crc kubenswrapper[4661]: I0120 18:05:54.086285 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:04:39.681174398 +0000 UTC Jan 20 18:05:54 crc kubenswrapper[4661]: E0120 18:05:54.207151 4661 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 18:05:54 crc kubenswrapper[4661]: I0120 18:05:54.461796 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:54 crc kubenswrapper[4661]: I0120 18:05:54.462046 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:54 crc kubenswrapper[4661]: I0120 18:05:54.463692 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:54 crc kubenswrapper[4661]: I0120 18:05:54.463740 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:54 crc kubenswrapper[4661]: I0120 18:05:54.463754 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:54 crc kubenswrapper[4661]: I0120 18:05:54.469259 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:54 crc kubenswrapper[4661]: I0120 18:05:54.651068 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 18:05:54 crc kubenswrapper[4661]: I0120 18:05:54.651365 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:54 crc kubenswrapper[4661]: I0120 18:05:54.653206 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:54 crc kubenswrapper[4661]: I0120 18:05:54.653253 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:54 crc kubenswrapper[4661]: I0120 18:05:54.653264 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:55 crc kubenswrapper[4661]: I0120 18:05:55.086430 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 18:52:19.146829641 +0000 UTC Jan 20 18:05:55 crc kubenswrapper[4661]: I0120 18:05:55.247239 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:55 crc kubenswrapper[4661]: I0120 18:05:55.248450 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:55 crc kubenswrapper[4661]: I0120 18:05:55.248499 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:55 crc kubenswrapper[4661]: I0120 18:05:55.248514 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 20 18:05:55 crc kubenswrapper[4661]: I0120 18:05:55.253723 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:05:56 crc kubenswrapper[4661]: I0120 18:05:56.086755 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:07:18.167728137 +0000 UTC Jan 20 18:05:56 crc kubenswrapper[4661]: I0120 18:05:56.256981 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:56 crc kubenswrapper[4661]: I0120 18:05:56.262764 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:56 crc kubenswrapper[4661]: I0120 18:05:56.262833 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:56 crc kubenswrapper[4661]: I0120 18:05:56.263012 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:57 crc kubenswrapper[4661]: I0120 18:05:57.066467 4661 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 20 18:05:57 crc kubenswrapper[4661]: E0120 18:05:57.082874 4661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 20 18:05:57 crc kubenswrapper[4661]: I0120 18:05:57.088467 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:33:39.829546702 +0000 UTC Jan 20 18:05:57 crc kubenswrapper[4661]: E0120 18:05:57.321205 4661 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 20 18:05:57 crc kubenswrapper[4661]: W0120 18:05:57.416773 4661 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 18:05:57 crc kubenswrapper[4661]: I0120 18:05:57.416879 4661 trace.go:236] Trace[1697811185]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 18:05:47.415) (total time: 10001ms): Jan 20 18:05:57 crc kubenswrapper[4661]: Trace[1697811185]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:05:57.416) Jan 20 18:05:57 crc kubenswrapper[4661]: Trace[1697811185]: [10.001556069s] [10.001556069s] END Jan 20 18:05:57 crc kubenswrapper[4661]: E0120 18:05:57.416907 4661 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 18:05:57 crc 
kubenswrapper[4661]: I0120 18:05:57.515188 4661 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 18:05:57 crc kubenswrapper[4661]: I0120 18:05:57.515564 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 20 18:05:57 crc kubenswrapper[4661]: I0120 18:05:57.522701 4661 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 18:05:57 crc kubenswrapper[4661]: I0120 18:05:57.522921 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 20 18:05:58 crc kubenswrapper[4661]: I0120 18:05:58.088752 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 15:10:52.233112184 +0000 UTC Jan 20 18:05:59 crc kubenswrapper[4661]: I0120 18:05:59.088866 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:24:34.477083523 +0000 UTC Jan 20 18:05:59 crc kubenswrapper[4661]: I0120 18:05:59.096637 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:59 crc kubenswrapper[4661]: I0120 18:05:59.097010 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:59 crc kubenswrapper[4661]: I0120 18:05:59.098447 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:05:59 crc kubenswrapper[4661]: I0120 18:05:59.098536 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:59 crc kubenswrapper[4661]: I0120 18:05:59.098593 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:05:59 crc kubenswrapper[4661]: I0120 18:05:59.103605 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:05:59 crc kubenswrapper[4661]: I0120 18:05:59.265098 4661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 18:05:59 crc kubenswrapper[4661]: I0120 18:05:59.265152 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:05:59 crc kubenswrapper[4661]: I0120 18:05:59.266192 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 18:05:59 crc kubenswrapper[4661]: I0120 18:05:59.266230 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:05:59 crc kubenswrapper[4661]: I0120 18:05:59.266246 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:00 crc kubenswrapper[4661]: I0120 18:06:00.089502 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 09:13:29.79530849 +0000 UTC Jan 20 18:06:00 crc kubenswrapper[4661]: I0120 18:06:00.522338 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:06:00 crc kubenswrapper[4661]: I0120 18:06:00.523802 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:00 crc kubenswrapper[4661]: I0120 18:06:00.523836 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:00 crc kubenswrapper[4661]: I0120 18:06:00.523848 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:00 crc kubenswrapper[4661]: I0120 18:06:00.523874 4661 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 18:06:00 crc kubenswrapper[4661]: E0120 18:06:00.527451 4661 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 20 18:06:01 crc kubenswrapper[4661]: I0120 18:06:01.090723 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:39:05.265358977 +0000 UTC Jan 20 18:06:01 crc kubenswrapper[4661]: I0120 18:06:01.945757 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 20 18:06:01 crc kubenswrapper[4661]: I0120 18:06:01.946014 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:06:01 crc kubenswrapper[4661]: I0120 18:06:01.947566 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:01 crc kubenswrapper[4661]: I0120 18:06:01.947626 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:01 crc kubenswrapper[4661]: I0120 18:06:01.947640 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:01 crc kubenswrapper[4661]: I0120 18:06:01.966862 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.091426 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 10:22:06.300219722 +0000 UTC Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.273853 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.275615 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:02 crc 
kubenswrapper[4661]: I0120 18:06:02.275696 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.275715 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.474631 4661 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.475033 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.511838 4661 trace.go:236] Trace[138032532]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 18:05:47.572) (total time: 14939ms): Jan 20 18:06:02 crc kubenswrapper[4661]: Trace[138032532]: ---"Objects listed" error: 14939ms (18:06:02.511) Jan 20 18:06:02 crc kubenswrapper[4661]: Trace[138032532]: [14.93956351s] [14.93956351s] END Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.511891 4661 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.514349 4661 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.514728 4661 trace.go:236] Trace[129960284]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 18:05:48.117) (total time: 14397ms): Jan 20 18:06:02 crc kubenswrapper[4661]: Trace[129960284]: ---"Objects listed" error: 14397ms (18:06:02.514) Jan 20 18:06:02 crc kubenswrapper[4661]: Trace[129960284]: [14.397222975s] [14.397222975s] END Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.514762 4661 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.515247 4661 trace.go:236] Trace[584637605]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 18:05:48.496) (total time: 14018ms): Jan 20 18:06:02 crc kubenswrapper[4661]: Trace[584637605]: ---"Objects listed" error: 14018ms (18:06:02.515) Jan 20 18:06:02 crc kubenswrapper[4661]: Trace[584637605]: [14.018679402s] [14.018679402s] END Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.515304 4661 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.528250 4661 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.581797 4661 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get 
\"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56324->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.581816 4661 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59054->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.581867 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56324->192.168.126.11:17697: read: connection reset by peer" Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.582006 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59054->192.168.126.11:17697: read: connection reset by peer" Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.583638 4661 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 20 18:06:02 crc kubenswrapper[4661]: I0120 18:06:02.583859 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.059836 4661 apiserver.go:52] "Watching apiserver" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.062643 4661 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.062936 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.063855 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.063965 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.064367 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.064463 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.064430 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.064738 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.065219 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.063776 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.065580 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.067010 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.067291 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.067600 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.067767 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.068020 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.068141 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.068782 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.069641 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.071304 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.078806 4661 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.091781 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:41:20.005828234 +0000 UTC Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.092242 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.106221 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.116379 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.119589 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.119742 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.119833 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.119926 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.120011 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.120109 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.120173 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.120261 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.120359 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.120451 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.120537 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.120635 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.120751 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.120851 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.120968 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.121081 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.121217 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") 
pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.121317 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.121423 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.121519 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.120372 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.121156 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.121192 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.121366 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.121578 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.121834 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.121930 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122053 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122156 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122252 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122356 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122447 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122547 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122644 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122775 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122875 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122970 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123077 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123171 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123262 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123351 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123447 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123545 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123641 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123771 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123877 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123972 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.124078 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.124189 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.124274 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.124359 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.124445 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.124612 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.124720 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 18:06:03 crc 
kubenswrapper[4661]: I0120 18:06:03.124808 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.124886 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.124965 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.125060 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.125167 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.125256 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.125342 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.125432 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.125530 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.125624 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" 
(UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.125747 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.125841 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.125930 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126027 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126132 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126234 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126321 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126414 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126507 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126605 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126720 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126831 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126934 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.127028 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.127126 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.127222 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.127327 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.127431 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.127523 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.127614 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: 
\"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.127777 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.128956 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.128991 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129039 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129069 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129124 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129204 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129234 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129289 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129317 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129363 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129392 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129446 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129527 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129775 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129818 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129871 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129902 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129956 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129987 4661 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130049 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130112 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130149 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130205 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130236 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130295 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130323 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130350 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130376 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130407 4661 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130435 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130463 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130489 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130507 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130526 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130546 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130567 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130585 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130604 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130624 4661 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130643 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130715 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130740 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130763 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130786 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130873 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130899 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130920 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130936 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130955 4661 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130974 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130993 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131016 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131037 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131057 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131075 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131094 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131111 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131128 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131146 
4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131165 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131184 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131202 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131222 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131243 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131267 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131283 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131299 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131317 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 
18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131337 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131356 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131373 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131410 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131429 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131448 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131467 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131485 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131506 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131524 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 
18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131543 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131562 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131579 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131599 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131619 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131637 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131656 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131698 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131723 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131740 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 18:06:03 crc kubenswrapper[4661]: 
I0120 18:06:03.131757 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131776 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131795 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131813 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131834 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131852 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131869 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131887 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131905 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131925 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131943 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131960 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131980 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132023 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132042 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132062 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132083 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132100 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132117 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132137 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 20 
18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132156 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132174 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132193 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132211 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132230 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132247 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132264 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132282 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132278 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132352 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132385 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132415 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132439 4661 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132462 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.133911 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.133980 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.134007 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.134147 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.134207 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.134231 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.134250 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.134309 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.134335 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.121945 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.134490 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.134902 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.135150 4661 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.135207 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.135250 4661 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.135439 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.121983 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). 
InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122170 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122402 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122432 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122432 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122692 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.122949 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123062 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.135645 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123326 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123404 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123636 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.123987 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.124101 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.125732 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.125932 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126183 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126317 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126452 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126868 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.126909 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.127157 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.127390 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.128205 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.128214 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.128295 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.128471 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.128487 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.128723 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129051 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.129088 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130229 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130877 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130926 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.130956 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131130 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131389 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131519 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131658 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131731 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.141722 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131815 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131905 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131994 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132378 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.132392 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.133008 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.133303 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.133572 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.134008 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.134061 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.134480 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.134832 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.135089 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.135709 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.135796 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.136228 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.136245 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.136388 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.136581 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.136641 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.136829 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.136829 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.136975 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.137303 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.137423 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.137451 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.139364 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.139655 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.140281 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.140638 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.140658 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.141084 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.141269 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.141300 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.141515 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.141826 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.142083 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.142403 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.142714 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.143355 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.143721 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.143847 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.143946 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.144290 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.144516 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.144604 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.144824 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.144841 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.144851 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.145642 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.145661 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.146231 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.146255 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.147074 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.147092 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.147392 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.147657 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.148027 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.148232 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.148414 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.148410 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.148906 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.149112 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.149167 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.149472 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.149333 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.149413 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.149448 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.149998 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.150278 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.150377 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.150491 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.150731 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:06:03.650658817 +0000 UTC m=+19.981448519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.151001 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.154813 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.154916 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.154998 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.155043 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.155082 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.155125 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.155413 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.155746 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.155761 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.155795 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.131890 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.156521 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.156606 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.157298 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.157693 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.157809 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.158462 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.158623 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.158691 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.158696 4661 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.167990 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:03.66795963 +0000 UTC m=+19.998749332 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.160787 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.161107 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.157842 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.168436 4661 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.168497 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.158729 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.159151 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.160090 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.160314 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.160166 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.160520 4661 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.168655 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:03.668637639 +0000 UTC m=+19.999427331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.162309 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.162629 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.163071 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.163095 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.161727 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.168868 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.169442 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.169512 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.169627 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.169719 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.170239 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.170483 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.175515 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.175987 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.176015 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.176037 4661 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.176103 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:03.676085057 +0000 UTC m=+20.006874749 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.176293 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.178360 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.178745 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.179009 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.179220 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.179260 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.179280 4661 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.179338 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:03.679320347 +0000 UTC m=+20.010110039 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.180176 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.182081 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.183566 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.184301 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.185922 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.191176 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.191444 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.192796 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.192975 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.198323 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.199271 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.200037 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.200308 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.201122 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.202872 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.203870 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.206259 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.206385 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.207743 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.207907 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.208151 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.208688 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.208815 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.208930 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.210367 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.210620 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.210747 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.213778 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.216073 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.218804 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.219768 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.219953 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.220225 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.220337 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.221455 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.225336 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.228946 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.235528 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236406 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236478 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236592 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236636 4661 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236650 4661 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236661 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236776 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236790 4661 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236794 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236802 4661 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236869 4661 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236886 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236902 4661 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236917 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236605 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236931 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.236981 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237025 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237041 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237058 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237102 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237114 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237124 4661 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237135 4661 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237146 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237182 4661 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237309 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237320 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237330 4661 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237343 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237375 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237389 4661 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237402 4661 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237421 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237460 4661 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237474 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237483 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237492 4661 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237502 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237540 4661 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237551 4661 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237563 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237574 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237584 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237595 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237616 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237627 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237638 4661 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237647 4661 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237657 4661 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237733 4661 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237745 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237755 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237765 4661 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237776 4661 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237805 4661 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237822 4661 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237836 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237849 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237862 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237898 4661 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237912 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237925 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237940 4661 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237971 4661 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237983 4661 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.237996 4661 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238011 4661 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238023 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238055 4661 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238067 4661 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238079 4661 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238091 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238104 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238138 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238151 4661 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238164 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238177 4661 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238208 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238221 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238235 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238247 4661 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238261 4661 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238293 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238305 4661 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238317 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238330 4661 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238342 4661 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238377 4661 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238392 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238405 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238478 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238491 4661 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238504 4661 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238540 4661 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238556 4661 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238570 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238583 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238595 4661 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238628 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238640 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238654 4661 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238697 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238713 4661 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238726 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238737 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238748 4661 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238776 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238788 4661 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238800 4661 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238812 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238822 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238851 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238862 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238873 4661 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238883 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238893 4661 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238903 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238929 4661 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238941 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238952 4661 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238961 4661 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238970 4661 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" 
Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.238979 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239005 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239015 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239025 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239035 4661 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239045 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239055 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239081 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239092 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239103 4661 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239112 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239122 4661 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239132 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239155 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239166 4661 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239175 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239185 4661 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239194 4661 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239203 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239213 4661 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239237 4661 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239250 4661 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239259 4661 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239268 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239279 4661 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239288 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" 
(UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239314 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239323 4661 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239332 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239341 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239350 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239359 4661 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239386 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239396 4661 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239405 4661 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239414 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239424 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239433 4661 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239442 4661 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239466 4661 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239475 4661 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239484 4661 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239493 4661 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239503 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239514 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239523 4661 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239547 4661 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239557 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239565 4661 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239573 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239585 4661 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239594 4661 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239616 4661 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239626 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239636 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239646 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239655 4661 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239688 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239701 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239712 4661 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239721 4661 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239731 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239741 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239765 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239782 
4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239791 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.239799 4661 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.243498 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.245107 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.277253 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.278415 4661 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1" exitCode=255 Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.278463 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1"} Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.292880 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.299374 4661 scope.go:117] "RemoveContainer" containerID="3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.299579 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.307152 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.319645 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.335331 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.340509 4661 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.340566 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.346891 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.360026 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.379346 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.389099 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 18:06:03 crc kubenswrapper[4661]: W0120 18:06:03.393440 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-00f3758975edb3be9d44630a65339ae9e7a74fe86fbdbf750939954b35e5a14e WatchSource:0}: Error finding container 00f3758975edb3be9d44630a65339ae9e7a74fe86fbdbf750939954b35e5a14e: Status 404 returned error can't find the container with id 00f3758975edb3be9d44630a65339ae9e7a74fe86fbdbf750939954b35e5a14e Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.399725 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 18:06:03 crc kubenswrapper[4661]: W0120 18:06:03.412452 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-781ce24f4a8b407962b47900fb47bed55e73c8709f728bd59db13819437d31b9 WatchSource:0}: Error finding container 781ce24f4a8b407962b47900fb47bed55e73c8709f728bd59db13819437d31b9: Status 404 returned error can't find the container with id 781ce24f4a8b407962b47900fb47bed55e73c8709f728bd59db13819437d31b9 Jan 20 18:06:03 crc kubenswrapper[4661]: W0120 18:06:03.418380 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-a426dbce36e9bb5148d6638873e131088e0991ea7d1cbe9655cbf989346a6ce1 WatchSource:0}: Error finding container a426dbce36e9bb5148d6638873e131088e0991ea7d1cbe9655cbf989346a6ce1: Status 404 returned error can't find the container with id a426dbce36e9bb5148d6638873e131088e0991ea7d1cbe9655cbf989346a6ce1 Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.727508 4661 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.743608 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.743710 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.743741 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.743779 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:03 crc kubenswrapper[4661]: I0120 18:06:03.743801 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.743917 4661 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.743932 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.743942 4661 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.743985 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:04.743973223 +0000 UTC m=+21.074762885 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.744335 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:06:04.744327023 +0000 UTC m=+21.075116685 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.744371 4661 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.744393 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:04.744386914 +0000 UTC m=+21.075176576 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.744436 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.744447 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.744454 4661 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.744472 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:04.744467127 +0000 UTC m=+21.075256789 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.744521 4661 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:03 crc kubenswrapper[4661]: E0120 18:06:03.744541 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:04.744535639 +0000 UTC m=+21.075325291 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.093139 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:30:41.179091648 +0000 UTC Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.147779 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.148277 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.149487 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.150140 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.151481 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.152026 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.152641 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.153551 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.154174 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.155090 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.155558 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.157753 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.158325 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.158962 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.160044 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.160655 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.161778 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.162600 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.163301 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.164762 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.165353 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.166850 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.167367 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.168704 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.168917 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.169272 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.170269 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.171628 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.172477 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.173939 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" 
path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.177784 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.193132 4661 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.193259 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.194475 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.194976 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.199407 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.199839 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.201435 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.202199 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.203140 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.203798 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.204959 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.205431 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.206410 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.207370 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.208066 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.208979 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.209529 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.210492 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.211317 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.211879 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.212397 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.212782 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.213225 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.214240 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.214932 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.215487 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.227352 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.258564 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.281591 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a426dbce36e9bb5148d6638873e131088e0991ea7d1cbe9655cbf989346a6ce1"} Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.283306 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea"} Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.283378 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee"} Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.283393 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"781ce24f4a8b407962b47900fb47bed55e73c8709f728bd59db13819437d31b9"} Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.284511 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27"} Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.284556 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"00f3758975edb3be9d44630a65339ae9e7a74fe86fbdbf750939954b35e5a14e"} Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.286301 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.287768 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77"} Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.288429 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.293944 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.314142 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.359077 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.403989 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.466152 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.490872 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.525440 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.574419 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.586614 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-9m9jm"] Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.587127 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-9m9jm" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.590525 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.594550 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.595654 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.596131 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.616814 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.640289 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.657246 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn7nv\" (UniqueName: \"kubernetes.io/projected/c44ff326-6791-438a-8d65-b2be26e9c819-kube-api-access-kn7nv\") pod \"node-resolver-9m9jm\" (UID: \"c44ff326-6791-438a-8d65-b2be26e9c819\") " pod="openshift-dns/node-resolver-9m9jm" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.657508 4661 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c44ff326-6791-438a-8d65-b2be26e9c819-hosts-file\") pod \"node-resolver-9m9jm\" (UID: \"c44ff326-6791-438a-8d65-b2be26e9c819\") " pod="openshift-dns/node-resolver-9m9jm" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.676020 4661 csr.go:261] certificate signing request csr-rf96r is approved, waiting to be issued Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.685064 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.737865 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.758860 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.759188 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.759293 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn7nv\" (UniqueName: \"kubernetes.io/projected/c44ff326-6791-438a-8d65-b2be26e9c819-kube-api-access-kn7nv\") pod \"node-resolver-9m9jm\" (UID: \"c44ff326-6791-438a-8d65-b2be26e9c819\") " pod="openshift-dns/node-resolver-9m9jm" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.759390 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.759484 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c44ff326-6791-438a-8d65-b2be26e9c819-hosts-file\") pod \"node-resolver-9m9jm\" (UID: \"c44ff326-6791-438a-8d65-b2be26e9c819\") " pod="openshift-dns/node-resolver-9m9jm" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.759591 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 
18:06:04.759715 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:04 crc kubenswrapper[4661]: E0120 18:06:04.759922 4661 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:04 crc kubenswrapper[4661]: E0120 18:06:04.760069 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:06.760050075 +0000 UTC m=+23.090839737 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:04 crc kubenswrapper[4661]: E0120 18:06:04.760500 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:06:06.760490007 +0000 UTC m=+23.091279669 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:06:04 crc kubenswrapper[4661]: E0120 18:06:04.760692 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:04 crc kubenswrapper[4661]: E0120 18:06:04.760786 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:04 crc kubenswrapper[4661]: E0120 18:06:04.760860 4661 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:04 crc kubenswrapper[4661]: E0120 18:06:04.760954 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:06.76094203 +0000 UTC m=+23.091731692 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:04 crc kubenswrapper[4661]: E0120 18:06:04.761253 4661 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:04 crc kubenswrapper[4661]: E0120 18:06:04.761565 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:06.761553347 +0000 UTC m=+23.092343009 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.761711 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c44ff326-6791-438a-8d65-b2be26e9c819-hosts-file\") pod \"node-resolver-9m9jm\" (UID: \"c44ff326-6791-438a-8d65-b2be26e9c819\") " pod="openshift-dns/node-resolver-9m9jm" Jan 20 18:06:04 crc kubenswrapper[4661]: E0120 18:06:04.764984 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:04 crc kubenswrapper[4661]: E0120 18:06:04.765087 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:04 crc kubenswrapper[4661]: E0120 18:06:04.765166 4661 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:04 crc kubenswrapper[4661]: E0120 18:06:04.765267 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:06.76525632 +0000 UTC m=+23.096045982 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.775399 4661 csr.go:257] certificate signing request csr-rf96r is issued Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.816043 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.833772 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn7nv\" (UniqueName: \"kubernetes.io/projected/c44ff326-6791-438a-8d65-b2be26e9c819-kube-api-access-kn7nv\") pod \"node-resolver-9m9jm\" (UID: \"c44ff326-6791-438a-8d65-b2be26e9c819\") " pod="openshift-dns/node-resolver-9m9jm" 
Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.876191 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.897864 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-9m9jm" Jan 20 18:06:04 crc kubenswrapper[4661]: W0120 18:06:04.915304 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc44ff326_6791_438a_8d65_b2be26e9c819.slice/crio-09ac1435ed72c71f25ff8be022d5c8f562d937f9d32615c6517fe3c55a658374 WatchSource:0}: Error finding container 09ac1435ed72c71f25ff8be022d5c8f562d937f9d32615c6517fe3c55a658374: Status 404 returned error can't find the container with id 09ac1435ed72c71f25ff8be022d5c8f562d937f9d32615c6517fe3c55a658374 Jan 20 18:06:04 crc kubenswrapper[4661]: I0120 18:06:04.928384 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.041362 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.094057 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:41:43.391384443 +0000 UTC Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.141041 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.141071 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:05 crc kubenswrapper[4661]: E0120 18:06:05.141173 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.141531 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:05 crc kubenswrapper[4661]: E0120 18:06:05.141617 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:05 crc kubenswrapper[4661]: E0120 18:06:05.141715 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.291803 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9m9jm" event={"ID":"c44ff326-6791-438a-8d65-b2be26e9c819","Type":"ContainerStarted","Data":"de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2"} Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.291875 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9m9jm" event={"ID":"c44ff326-6791-438a-8d65-b2be26e9c819","Type":"ContainerStarted","Data":"09ac1435ed72c71f25ff8be022d5c8f562d937f9d32615c6517fe3c55a658374"} Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.307479 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.326873 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.352283 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.366548 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.378162 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.394358 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.411915 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.441623 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.603988 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-z97p2"] Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.604366 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.605041 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-svf7c"] Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.605475 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.613156 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-j9j6p"] Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.614042 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.614221 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.632437 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.632546 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.632696 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.632754 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.632852 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 20 18:06:05 crc kubenswrapper[4661]: W0120 18:06:05.654210 4661 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 20 18:06:05 crc kubenswrapper[4661]: E0120 18:06:05.654268 4661 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.665235 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.665354 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.665533 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 20 18:06:05 crc kubenswrapper[4661]: W0120 18:06:05.665590 4661 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 20 18:06:05 crc kubenswrapper[4661]: E0120 18:06:05.665644 4661 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.665860 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 20 18:06:05 crc 
kubenswrapper[4661]: I0120 18:06:05.676121 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff8qg\" (UniqueName: \"kubernetes.io/projected/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-kube-api-access-ff8qg\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676172 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-hostroot\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676195 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-etc-kubernetes\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676215 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-multus-daemon-config\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676252 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/78855c94-da90-4523-8d65-70f7fd153dee-rootfs\") pod \"machine-config-daemon-svf7c\" (UID: \"78855c94-da90-4523-8d65-70f7fd153dee\") " pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676269 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-system-cni-dir\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676287 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-multus-conf-dir\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676305 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/78855c94-da90-4523-8d65-70f7fd153dee-mcd-auth-proxy-config\") pod \"machine-config-daemon-svf7c\" (UID: \"78855c94-da90-4523-8d65-70f7fd153dee\") " pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676338 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-os-release\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc 
kubenswrapper[4661]: I0120 18:06:05.676359 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-cni-binary-copy\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676392 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-multus-socket-dir-parent\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676410 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-var-lib-kubelet\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676428 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-run-multus-certs\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676455 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-multus-cni-dir\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676472 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-cnibin\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676488 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-run-netns\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676511 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-var-lib-cni-multus\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676531 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-var-lib-cni-bin\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676554 
4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/78855c94-da90-4523-8d65-70f7fd153dee-proxy-tls\") pod \"machine-config-daemon-svf7c\" (UID: \"78855c94-da90-4523-8d65-70f7fd153dee\") " pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676574 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvj2k\" (UniqueName: \"kubernetes.io/projected/78855c94-da90-4523-8d65-70f7fd153dee-kube-api-access-zvj2k\") pod \"machine-config-daemon-svf7c\" (UID: \"78855c94-da90-4523-8d65-70f7fd153dee\") " pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.676593 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-run-k8s-cni-cncf-io\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.729255 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779077 4661 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-20 18:01:04 +0000 UTC, rotation deadline is 2026-11-17 18:20:30.474311552 +0000 UTC Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779140 4661 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7224h14m24.695175651s for next certificate rotation Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779512 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-run-k8s-cni-cncf-io\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779553 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-var-lib-cni-bin\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779580 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvj2k\" (UniqueName: \"kubernetes.io/projected/78855c94-da90-4523-8d65-70f7fd153dee-kube-api-access-zvj2k\") pod \"machine-config-daemon-svf7c\" (UID: \"78855c94-da90-4523-8d65-70f7fd153dee\") " pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779607 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e190abed-d178-4ce7-9485-f6090ecf8578-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779618 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-run-k8s-cni-cncf-io\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779632 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-mxwgg\" (UniqueName: \"kubernetes.io/projected/e190abed-d178-4ce7-9485-f6090ecf8578-kube-api-access-mxwgg\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779657 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-hostroot\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779705 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e190abed-d178-4ce7-9485-f6090ecf8578-system-cni-dir\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779747 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e190abed-d178-4ce7-9485-f6090ecf8578-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779785 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-multus-daemon-config\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779812 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-hostroot\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779867 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/78855c94-da90-4523-8d65-70f7fd153dee-rootfs\") pod \"machine-config-daemon-svf7c\" (UID: \"78855c94-da90-4523-8d65-70f7fd153dee\") " pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779821 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/78855c94-da90-4523-8d65-70f7fd153dee-rootfs\") pod \"machine-config-daemon-svf7c\" (UID: \"78855c94-da90-4523-8d65-70f7fd153dee\") " pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779802 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-var-lib-cni-bin\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779928 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-multus-conf-dir\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779971 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-os-release\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.779996 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-multus-socket-dir-parent\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780015 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-run-netns\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780030 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-var-lib-cni-multus\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780045 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-var-lib-kubelet\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780060 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-run-multus-certs\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780083 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-multus-cni-dir\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780105 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/78855c94-da90-4523-8d65-70f7fd153dee-proxy-tls\") pod \"machine-config-daemon-svf7c\" (UID: \"78855c94-da90-4523-8d65-70f7fd153dee\") " pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780110 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-multus-socket-dir-parent\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " 
pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780127 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e190abed-d178-4ce7-9485-f6090ecf8578-os-release\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780154 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-etc-kubernetes\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780162 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-var-lib-kubelet\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780173 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff8qg\" (UniqueName: \"kubernetes.io/projected/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-kube-api-access-ff8qg\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780169 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-multus-conf-dir\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780196 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-run-netns\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780196 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e190abed-d178-4ce7-9485-f6090ecf8578-cni-binary-copy\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780266 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-var-lib-cni-multus\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780279 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-system-cni-dir\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780299 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/78855c94-da90-4523-8d65-70f7fd153dee-mcd-auth-proxy-config\") pod \"machine-config-daemon-svf7c\" (UID: \"78855c94-da90-4523-8d65-70f7fd153dee\") " pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780305 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-etc-kubernetes\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780324 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-cni-binary-copy\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780323 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-os-release\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780364 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-cnibin\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780421 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e190abed-d178-4ce7-9485-f6090ecf8578-cnibin\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780431 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-cnibin\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780491 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-host-run-multus-certs\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780520 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-multus-daemon-config\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780580 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-multus-cni-dir\") pod \"multus-z97p2\" (UID: 
\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.780617 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-system-cni-dir\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.781281 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/78855c94-da90-4523-8d65-70f7fd153dee-mcd-auth-proxy-config\") pod \"machine-config-daemon-svf7c\" (UID: \"78855c94-da90-4523-8d65-70f7fd153dee\") " pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.781390 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-cni-binary-copy\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.785218 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/78855c94-da90-4523-8d65-70f7fd153dee-proxy-tls\") pod \"machine-config-daemon-svf7c\" (UID: \"78855c94-da90-4523-8d65-70f7fd153dee\") " pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.823048 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff8qg\" (UniqueName: \"kubernetes.io/projected/5b6f2401-3eb9-4ee4-b79c-6faee06bc21c-kube-api-access-ff8qg\") pod \"multus-z97p2\" (UID: \"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\") " pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.831658 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvj2k\" (UniqueName: \"kubernetes.io/projected/78855c94-da90-4523-8d65-70f7fd153dee-kube-api-access-zvj2k\") pod \"machine-config-daemon-svf7c\" (UID: \"78855c94-da90-4523-8d65-70f7fd153dee\") " pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.857322 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.881497 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e190abed-d178-4ce7-9485-f6090ecf8578-os-release\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.881540 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e190abed-d178-4ce7-9485-f6090ecf8578-cni-binary-copy\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.881565 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e190abed-d178-4ce7-9485-f6090ecf8578-cnibin\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.881583 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e190abed-d178-4ce7-9485-f6090ecf8578-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.881647 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxwgg\" (UniqueName: \"kubernetes.io/projected/e190abed-d178-4ce7-9485-f6090ecf8578-kube-api-access-mxwgg\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.881683 4661 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e190abed-d178-4ce7-9485-f6090ecf8578-system-cni-dir\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.881687 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e190abed-d178-4ce7-9485-f6090ecf8578-os-release\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.881750 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e190abed-d178-4ce7-9485-f6090ecf8578-system-cni-dir\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.881691 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e190abed-d178-4ce7-9485-f6090ecf8578-cnibin\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.881701 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e190abed-d178-4ce7-9485-f6090ecf8578-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.882121 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e190abed-d178-4ce7-9485-f6090ecf8578-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.882530 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e190abed-d178-4ce7-9485-f6090ecf8578-cni-binary-copy\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.914768 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.917796 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-z97p2" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.927176 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.932918 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxwgg\" (UniqueName: \"kubernetes.io/projected/e190abed-d178-4ce7-9485-f6090ecf8578-kube-api-access-mxwgg\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:05 crc kubenswrapper[4661]: W0120 18:06:05.942002 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b6f2401_3eb9_4ee4_b79c_6faee06bc21c.slice/crio-2f6c64b0cc04e6a5245e728a5ef16736ea4c34bbb32dd3f9b7bc06d25416df15 WatchSource:0}: Error finding container 2f6c64b0cc04e6a5245e728a5ef16736ea4c34bbb32dd3f9b7bc06d25416df15: Status 404 returned error can't find the container with id 2f6c64b0cc04e6a5245e728a5ef16736ea4c34bbb32dd3f9b7bc06d25416df15 Jan 20 18:06:05 crc kubenswrapper[4661]: W0120 18:06:05.945390 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78855c94_da90_4523_8d65_70f7fd153dee.slice/crio-cced88821663aa6982f143f19b1b333dfd5e178f736f969785c6f95061c5d60a WatchSource:0}: Error finding container cced88821663aa6982f143f19b1b333dfd5e178f736f969785c6f95061c5d60a: Status 404 returned error can't find the container with id cced88821663aa6982f143f19b1b333dfd5e178f736f969785c6f95061c5d60a Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.958533 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.976084 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:05 crc kubenswrapper[4661]: I0120 18:06:05.999409 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:05Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.015006 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.035473 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.053029 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.073368 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.088651 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.094242 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:18:26.04925083 +0000 UTC Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.114612 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.124534 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fxb9d"] Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.125541 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.130784 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.130874 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.131047 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.131136 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.131184 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.131292 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.141358 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.141608 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7
73257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.163698 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.183869 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.210972 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.226914 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.241785 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.264340 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.283527 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.284782 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3856f23c-8dc3-4b36-b3b7-955dff315250-ovn-node-metrics-cert\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.284829 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-ovn\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 
18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.284848 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-log-socket\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.284865 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.284897 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-slash\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.284916 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-run-netns\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.284931 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-systemd\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.284960 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-etc-openvswitch\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.284976 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-run-ovn-kubernetes\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.284991 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66kpm\" (UniqueName: \"kubernetes.io/projected/3856f23c-8dc3-4b36-b3b7-955dff315250-kube-api-access-66kpm\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.285006 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-kubelet\") 
pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.285046 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-env-overrides\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.285065 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-ovnkube-config\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.285083 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-ovnkube-script-lib\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.285118 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-node-log\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.285142 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-systemd-units\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.285157 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-var-lib-openvswitch\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.285178 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-cni-netd\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.285196 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-cni-bin\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.285211 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-openvswitch\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.295945 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e"} Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.296099 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e"} Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.296162 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"cced88821663aa6982f143f19b1b333dfd5e178f736f969785c6f95061c5d60a"} Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.297850 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-z97p2" event={"ID":"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c","Type":"ContainerStarted","Data":"5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d"} Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.297906 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-z97p2" event={"ID":"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c","Type":"ContainerStarted","Data":"2f6c64b0cc04e6a5245e728a5ef16736ea4c34bbb32dd3f9b7bc06d25416df15"} Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.299492 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719"} Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.302027 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.318584 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.347450 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.367321 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.379847 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386233 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-systemd-units\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386286 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-var-lib-openvswitch\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386307 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-cni-netd\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386326 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-cni-bin\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386342 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-openvswitch\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" 
Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386374 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-ovn\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386391 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-log-socket\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386407 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3856f23c-8dc3-4b36-b3b7-955dff315250-ovn-node-metrics-cert\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386445 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-slash\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386460 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-run-netns\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386478 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386497 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-systemd\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386512 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-etc-openvswitch\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386527 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-run-ovn-kubernetes\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 
18:06:06.386613 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-run-ovn-kubernetes\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386615 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66kpm\" (UniqueName: \"kubernetes.io/projected/3856f23c-8dc3-4b36-b3b7-955dff315250-kube-api-access-66kpm\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386700 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-kubelet\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386769 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-env-overrides\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386790 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-ovnkube-script-lib\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386886 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-ovnkube-config\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.386937 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-node-log\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.387059 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-node-log\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.387121 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-systemd-units\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.387158 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-var-lib-openvswitch\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.387183 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-cni-netd\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.387229 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-cni-bin\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.387286 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-kubelet\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.387289 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-openvswitch\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.387547 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-ovn\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.387582 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-log-socket\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.388157 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-run-netns\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.388304 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-env-overrides\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.388379 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-slash\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.388518 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-ovnkube-script-lib\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.388858 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-ovnkube-config\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.388956 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-systemd\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.388962 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.388989 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-etc-openvswitch\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.392453 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3856f23c-8dc3-4b36-b3b7-955dff315250-ovn-node-metrics-cert\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.397997 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.416536 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66kpm\" (UniqueName: \"kubernetes.io/projected/3856f23c-8dc3-4b36-b3b7-955dff315250-kube-api-access-66kpm\") pod \"ovnkube-node-fxb9d\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.423536 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.436711 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.449825 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.457640 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.467304 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: W0120 18:06:06.476630 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3856f23c_8dc3_4b36_b3b7_955dff315250.slice/crio-6fc1c0bb3d0c80288d19a789669a571d8829461ce364a38889c09a2c46f5f070 WatchSource:0}: Error finding container 6fc1c0bb3d0c80288d19a789669a571d8829461ce364a38889c09a2c46f5f070: Status 404 returned error can't find the container with id 6fc1c0bb3d0c80288d19a789669a571d8829461ce364a38889c09a2c46f5f070 Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.484249 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.487887 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.493503 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e190abed-d178-4ce7-9485-f6090ecf8578-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j9j6p\" (UID: \"e190abed-d178-4ce7-9485-f6090ecf8578\") " pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.494410 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.509688 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.534550 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.562591 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.597370 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.614530 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.642745 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.677279 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.700426 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.758530 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.790879 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.791047 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.791101 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.791128 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.791153 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.791309 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.791327 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.791339 4661 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.791404 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:10.791381645 +0000 UTC m=+27.122171307 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.791841 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:06:10.791832378 +0000 UTC m=+27.122622040 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.791882 4661 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.791905 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:10.79189917 +0000 UTC m=+27.122688832 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.791951 4661 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.791972 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:10.791966091 +0000 UTC m=+27.122755753 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.792011 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.792024 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.792033 4661 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.792054 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:10.792046944 +0000 UTC m=+27.122836606 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.840205 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.914741 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.928660 4661 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.930559 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.930615 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.930629 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.930786 4661 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.944868 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.949947 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.955068 4661 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.955466 4661 
kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.956196 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.960479 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.960545 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.960558 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.960605 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.960625 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:06Z","lastTransitionTime":"2026-01-20T18:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:06 crc kubenswrapper[4661]: W0120 18:06:06.967526 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode190abed_d178_4ce7_9485_f6090ecf8578.slice/crio-3f51b7cc7f839f08d589a89ea10e000c9377c2bd8f6816c41f079158b4313746 WatchSource:0}: Error finding container 3f51b7cc7f839f08d589a89ea10e000c9377c2bd8f6816c41f079158b4313746: Status 404 returned error can't find the container with id 3f51b7cc7f839f08d589a89ea10e000c9377c2bd8f6816c41f079158b4313746 Jan 20 18:06:06 crc kubenswrapper[4661]: E0120 18:06:06.987652 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-20T18:06:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.995872 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.995905 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.995914 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.995930 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:06 crc kubenswrapper[4661]: I0120 18:06:06.995942 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:06Z","lastTransitionTime":"2026-01-20T18:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:07 crc kubenswrapper[4661]: E0120 18:06:07.010849 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.017967 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.018010 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.018021 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.018042 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.018055 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:07Z","lastTransitionTime":"2026-01-20T18:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:07 crc kubenswrapper[4661]: E0120 18:06:07.047479 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.055386 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.055427 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.055436 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.055451 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.055461 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:07Z","lastTransitionTime":"2026-01-20T18:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:07 crc kubenswrapper[4661]: E0120 18:06:07.077598 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.086819 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.086866 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.086876 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.086894 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.086904 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:07Z","lastTransitionTime":"2026-01-20T18:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.094444 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 21:09:19.547166396 +0000 UTC Jan 20 18:06:07 crc kubenswrapper[4661]: E0120 18:06:07.109257 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: E0120 18:06:07.122060 4661 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.127152 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.127199 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.127211 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.127233 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.127265 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:07Z","lastTransitionTime":"2026-01-20T18:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.141622 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:07 crc kubenswrapper[4661]: E0120 18:06:07.141793 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.142115 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:07 crc kubenswrapper[4661]: E0120 18:06:07.142168 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.142480 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:07 crc kubenswrapper[4661]: E0120 18:06:07.142747 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.229562 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.229625 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.229637 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.229676 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.229690 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:07Z","lastTransitionTime":"2026-01-20T18:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.304911 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" event={"ID":"e190abed-d178-4ce7-9485-f6090ecf8578","Type":"ContainerStarted","Data":"6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d"} Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.304965 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" event={"ID":"e190abed-d178-4ce7-9485-f6090ecf8578","Type":"ContainerStarted","Data":"3f51b7cc7f839f08d589a89ea10e000c9377c2bd8f6816c41f079158b4313746"} Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.308863 4661 generic.go:334] "Generic (PLEG): container finished" podID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerID="babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a" exitCode=0 Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.309353 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a"} Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.309397 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerStarted","Data":"6fc1c0bb3d0c80288d19a789669a571d8829461ce364a38889c09a2c46f5f070"} Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.330281 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-tfdrt"] Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.331118 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-tfdrt" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.337361 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.337561 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.337738 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.337905 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.342018 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.342046 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.342055 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.342070 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.342081 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:07Z","lastTransitionTime":"2026-01-20T18:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.353292 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.381971 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.397836 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbq77\" (UniqueName: \"kubernetes.io/projected/1c3f1ce7-0584-4bf1-8398-a277e9a4599b-kube-api-access-gbq77\") pod \"node-ca-tfdrt\" (UID: \"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\") " pod="openshift-image-registry/node-ca-tfdrt" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.397923 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1c3f1ce7-0584-4bf1-8398-a277e9a4599b-serviceca\") pod \"node-ca-tfdrt\" (UID: \"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\") " pod="openshift-image-registry/node-ca-tfdrt" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.398964 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c3f1ce7-0584-4bf1-8398-a277e9a4599b-host\") pod \"node-ca-tfdrt\" (UID: \"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\") " pod="openshift-image-registry/node-ca-tfdrt" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.401700 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.420409 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.439752 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.451982 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.452035 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.452047 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.452071 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.452083 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:07Z","lastTransitionTime":"2026-01-20T18:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.462033 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.479760 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.492924 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.499767 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1c3f1ce7-0584-4bf1-8398-a277e9a4599b-serviceca\") pod \"node-ca-tfdrt\" (UID: \"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\") " pod="openshift-image-registry/node-ca-tfdrt" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.499831 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c3f1ce7-0584-4bf1-8398-a277e9a4599b-host\") pod \"node-ca-tfdrt\" (UID: \"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\") " pod="openshift-image-registry/node-ca-tfdrt" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.499892 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbq77\" (UniqueName: \"kubernetes.io/projected/1c3f1ce7-0584-4bf1-8398-a277e9a4599b-kube-api-access-gbq77\") pod \"node-ca-tfdrt\" (UID: \"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\") " pod="openshift-image-registry/node-ca-tfdrt" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.500005 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c3f1ce7-0584-4bf1-8398-a277e9a4599b-host\") pod \"node-ca-tfdrt\" (UID: \"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\") " pod="openshift-image-registry/node-ca-tfdrt" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.501468 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1c3f1ce7-0584-4bf1-8398-a277e9a4599b-serviceca\") pod \"node-ca-tfdrt\" (UID: \"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\") " pod="openshift-image-registry/node-ca-tfdrt" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.523406 4661 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.523651 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbq77\" (UniqueName: \"kubernetes.io/projected/1c3f1ce7-0584-4bf1-8398-a277e9a4599b-kube-api-access-gbq77\") pod \"node-ca-tfdrt\" (UID: \"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\") " pod="openshift-image-registry/node-ca-tfdrt" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.534706 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.549588 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.555925 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.555956 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.555964 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.555979 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.555989 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:07Z","lastTransitionTime":"2026-01-20T18:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.564848 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.577924 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.589381 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.611157 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.643325 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.659009 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.659045 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.659053 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.659071 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.659086 4661 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:07Z","lastTransitionTime":"2026-01-20T18:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.664716 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.688318 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.711970 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.734310 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.746645 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.762716 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.762759 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.762775 4661 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.762794 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.762804 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:07Z","lastTransitionTime":"2026-01-20T18:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.766653 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.788387 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.803324 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.817593 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.846253 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tfdrt" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.865172 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.865207 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.865216 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.865231 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.865241 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:07Z","lastTransitionTime":"2026-01-20T18:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.969388 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.969436 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.969445 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.969464 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:07 crc kubenswrapper[4661]: I0120 18:06:07.969477 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:07Z","lastTransitionTime":"2026-01-20T18:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.072033 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.072396 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.072408 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.072429 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.072441 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:08Z","lastTransitionTime":"2026-01-20T18:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.095185 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:57:38.881287211 +0000 UTC Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.182654 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.182706 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.182714 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.182732 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.182743 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:08Z","lastTransitionTime":"2026-01-20T18:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.290430 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.290888 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.291258 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.291560 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.291883 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:08Z","lastTransitionTime":"2026-01-20T18:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.314868 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tfdrt" event={"ID":"1c3f1ce7-0584-4bf1-8398-a277e9a4599b","Type":"ContainerStarted","Data":"163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.315184 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tfdrt" event={"ID":"1c3f1ce7-0584-4bf1-8398-a277e9a4599b","Type":"ContainerStarted","Data":"a49d6ab657c6a6513b49d8a3d498b4c52023bbb9c28b570c7f4c7407c474d3e2"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.316620 4661 generic.go:334] "Generic (PLEG): container finished" podID="e190abed-d178-4ce7-9485-f6090ecf8578" containerID="6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d" exitCode=0 Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.316712 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" event={"ID":"e190abed-d178-4ce7-9485-f6090ecf8578","Type":"ContainerDied","Data":"6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.319703 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerStarted","Data":"d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.319746 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerStarted","Data":"407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.337445 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.362584 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.374259 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.395223 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.395276 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.395289 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.395310 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.395323 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:08Z","lastTransitionTime":"2026-01-20T18:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.403695 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z 
is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.420889 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.436882 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.458858 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.474895 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.534450 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.534764 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.534890 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.535006 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.535109 4661 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:08Z","lastTransitionTime":"2026-01-20T18:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.554518 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.568824 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.589620 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.604947 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.630798 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.637130 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.637165 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.637174 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.637191 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.637201 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:08Z","lastTransitionTime":"2026-01-20T18:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.641941 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.654971 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.666021 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.683566 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.703275 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.718516 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.734088 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.740219 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.740257 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.740268 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.740288 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.740300 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:08Z","lastTransitionTime":"2026-01-20T18:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.751437 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.762398 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.783649 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.801017 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.814128 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.827218 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.843832 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.843872 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.843881 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.843902 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.843914 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:08Z","lastTransitionTime":"2026-01-20T18:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.947011 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.947064 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.947079 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.947102 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:08 crc kubenswrapper[4661]: I0120 18:06:08.947116 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:08Z","lastTransitionTime":"2026-01-20T18:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.050236 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.050293 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.050309 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.050332 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.050345 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:09Z","lastTransitionTime":"2026-01-20T18:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.095954 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:14:11.432237456 +0000 UTC Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.141622 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:09 crc kubenswrapper[4661]: E0120 18:06:09.141824 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.141866 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.141952 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:09 crc kubenswrapper[4661]: E0120 18:06:09.141996 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:09 crc kubenswrapper[4661]: E0120 18:06:09.142124 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.153850 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.153890 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.153905 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.153923 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.153940 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:09Z","lastTransitionTime":"2026-01-20T18:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.257517 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.257604 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.257618 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.257639 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.257692 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:09Z","lastTransitionTime":"2026-01-20T18:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.330524 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerStarted","Data":"6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.330635 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerStarted","Data":"1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.330708 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerStarted","Data":"37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.330738 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerStarted","Data":"54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.333936 4661 generic.go:334] "Generic (PLEG): container finished" podID="e190abed-d178-4ce7-9485-f6090ecf8578" containerID="d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597" exitCode=0 Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.334009 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" event={"ID":"e190abed-d178-4ce7-9485-f6090ecf8578","Type":"ContainerDied","Data":"d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.361727 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.361774 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.361791 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.361816 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.361832 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:09Z","lastTransitionTime":"2026-01-20T18:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.365805 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.388812 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.409715 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.421072 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.437102 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.452571 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.465511 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.465564 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.465573 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.465591 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.465604 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:09Z","lastTransitionTime":"2026-01-20T18:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.475276 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.478431 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.482982 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.491212 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.491809 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.507540 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.521360 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.550148 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z 
is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.562360 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.568400 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.568596 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.568716 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.568862 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.568946 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:09Z","lastTransitionTime":"2026-01-20T18:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.574484 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.589890 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f92343
5bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.608215 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.658120 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.671975 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.672029 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.672045 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.672070 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.672090 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:09Z","lastTransitionTime":"2026-01-20T18:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.674919 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.697207 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.710208 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.727650 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.740844 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.754889 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.767190 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.775439 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.775475 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.775488 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.775510 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.775524 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:09Z","lastTransitionTime":"2026-01-20T18:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.780362 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.792573 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.806509 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.821188 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.877721 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.877761 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.877769 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.877785 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.877795 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:09Z","lastTransitionTime":"2026-01-20T18:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.980607 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.980640 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.980649 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.980686 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:09 crc kubenswrapper[4661]: I0120 18:06:09.980697 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:09Z","lastTransitionTime":"2026-01-20T18:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.084505 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.084589 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.084613 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.084643 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.084701 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:10Z","lastTransitionTime":"2026-01-20T18:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.097532 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:05:48.909775621 +0000 UTC Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.187785 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.187879 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.187899 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.187925 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.187946 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:10Z","lastTransitionTime":"2026-01-20T18:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.291230 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.291317 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.291341 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.291377 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.291403 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:10Z","lastTransitionTime":"2026-01-20T18:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.341926 4661 generic.go:334] "Generic (PLEG): container finished" podID="e190abed-d178-4ce7-9485-f6090ecf8578" containerID="31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da" exitCode=0 Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.342028 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" event={"ID":"e190abed-d178-4ce7-9485-f6090ecf8578","Type":"ContainerDied","Data":"31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da"} Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.374254 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv
j2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.394303 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.394375 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.394395 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.394420 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.394438 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:10Z","lastTransitionTime":"2026-01-20T18:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.394913 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.417814 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.439302 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.463374 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.482253 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.499792 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.501516 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.501566 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.501598 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.501623 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.501641 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:10Z","lastTransitionTime":"2026-01-20T18:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.515272 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.540412 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.563119 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.578469 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.592674 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.607479 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.607535 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.607552 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.607578 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.607596 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:10Z","lastTransitionTime":"2026-01-20T18:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.609065 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.625372 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:10Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.711146 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.711199 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.711214 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.711239 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.711256 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:10Z","lastTransitionTime":"2026-01-20T18:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.814550 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.814868 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.815151 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.815367 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.815815 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:10Z","lastTransitionTime":"2026-01-20T18:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.869975 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.870138 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:10 crc kubenswrapper[4661]: E0120 18:06:10.870163 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:06:18.870134592 +0000 UTC m=+35.200924254 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.870206 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.870259 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.870309 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:10 crc kubenswrapper[4661]: E0120 18:06:10.870363 4661 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:10 crc kubenswrapper[4661]: E0120 18:06:10.870373 4661 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:10 crc kubenswrapper[4661]: E0120 18:06:10.870448 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:18.87043975 +0000 UTC m=+35.201229412 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:10 crc kubenswrapper[4661]: E0120 18:06:10.870532 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:10 crc kubenswrapper[4661]: E0120 18:06:10.870562 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-20 18:06:18.870485661 +0000 UTC m=+35.201275363 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:10 crc kubenswrapper[4661]: E0120 18:06:10.870567 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:10 crc kubenswrapper[4661]: E0120 18:06:10.870642 4661 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:10 crc kubenswrapper[4661]: E0120 18:06:10.870535 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:10 crc kubenswrapper[4661]: E0120 18:06:10.870739 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:10 crc kubenswrapper[4661]: E0120 18:06:10.870751 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:18.870708828 +0000 UTC m=+35.201498530 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:10 crc kubenswrapper[4661]: E0120 18:06:10.870753 4661 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:10 crc kubenswrapper[4661]: E0120 18:06:10.870848 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:18.870832361 +0000 UTC m=+35.201622063 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.927188 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.927233 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.927244 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.927263 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:10 crc kubenswrapper[4661]: I0120 18:06:10.927274 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:10Z","lastTransitionTime":"2026-01-20T18:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.030203 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.030254 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.030269 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.030288 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.030300 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:11Z","lastTransitionTime":"2026-01-20T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.097953 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:56:44.338420094 +0000 UTC Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.134054 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.134119 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.134137 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.134165 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.134185 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:11Z","lastTransitionTime":"2026-01-20T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.141490 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.141527 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.141536 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:11 crc kubenswrapper[4661]: E0120 18:06:11.141657 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:11 crc kubenswrapper[4661]: E0120 18:06:11.141831 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:11 crc kubenswrapper[4661]: E0120 18:06:11.142053 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.237402 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.237515 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.237533 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.237560 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.237581 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:11Z","lastTransitionTime":"2026-01-20T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.340245 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.340822 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.340849 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.340880 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.340893 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:11Z","lastTransitionTime":"2026-01-20T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.349330 4661 generic.go:334] "Generic (PLEG): container finished" podID="e190abed-d178-4ce7-9485-f6090ecf8578" containerID="db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1" exitCode=0 Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.349420 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" event={"ID":"e190abed-d178-4ce7-9485-f6090ecf8578","Type":"ContainerDied","Data":"db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1"} Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.357508 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerStarted","Data":"dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648"} Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.369094 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.385028 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.402889 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.425404 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.439222 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.444712 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.444784 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.444812 4661 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.444848 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.444874 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:11Z","lastTransitionTime":"2026-01-20T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.457217 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.477268 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.496121 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.511112 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.522481 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.543600 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.552698 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.552739 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.552749 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.552769 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.552783 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:11Z","lastTransitionTime":"2026-01-20T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.558504 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.572400 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.586829 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.655542 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.655577 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.655587 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.655604 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.655614 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:11Z","lastTransitionTime":"2026-01-20T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.758826 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.759290 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.759304 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.759323 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.759336 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:11Z","lastTransitionTime":"2026-01-20T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.862682 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.862728 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.862738 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.862756 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.862768 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:11Z","lastTransitionTime":"2026-01-20T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.966044 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.966107 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.966119 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.966139 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:11 crc kubenswrapper[4661]: I0120 18:06:11.966153 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:11Z","lastTransitionTime":"2026-01-20T18:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.069520 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.069570 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.069585 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.069604 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.069617 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:12Z","lastTransitionTime":"2026-01-20T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.098450 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 08:29:28.852332322 +0000 UTC Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.172211 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.172251 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.172262 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.172281 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.172294 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:12Z","lastTransitionTime":"2026-01-20T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.275483 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.275518 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.275529 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.275545 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.275555 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:12Z","lastTransitionTime":"2026-01-20T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.378849 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.378879 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.378889 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.378903 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.378911 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:12Z","lastTransitionTime":"2026-01-20T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.482353 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.482391 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.482401 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.482418 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.482428 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:12Z","lastTransitionTime":"2026-01-20T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.584928 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.584979 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.584993 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.585012 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.585022 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:12Z","lastTransitionTime":"2026-01-20T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.688396 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.688454 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.688466 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.688487 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.688513 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:12Z","lastTransitionTime":"2026-01-20T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.791366 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.791418 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.791430 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.791450 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.791464 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:12Z","lastTransitionTime":"2026-01-20T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.894475 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.894539 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.894554 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.894578 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.894594 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:12Z","lastTransitionTime":"2026-01-20T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.997372 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.997772 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.997911 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.998007 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:12 crc kubenswrapper[4661]: I0120 18:06:12.998103 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:12Z","lastTransitionTime":"2026-01-20T18:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.098896 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:24:19.167116933 +0000 UTC Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.101794 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.101866 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.101880 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.101905 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.101932 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:13Z","lastTransitionTime":"2026-01-20T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.142188 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.142323 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.142476 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:13 crc kubenswrapper[4661]: E0120 18:06:13.142631 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:13 crc kubenswrapper[4661]: E0120 18:06:13.142748 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:13 crc kubenswrapper[4661]: E0120 18:06:13.142916 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.205069 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.205120 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.205190 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.205214 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.205257 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:13Z","lastTransitionTime":"2026-01-20T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.308707 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.308747 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.308756 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.308776 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.308787 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:13Z","lastTransitionTime":"2026-01-20T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.412957 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.413049 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.413066 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.413091 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.413108 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:13Z","lastTransitionTime":"2026-01-20T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.516424 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.516477 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.516494 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.516520 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.516538 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:13Z","lastTransitionTime":"2026-01-20T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.577191 4661 generic.go:334] "Generic (PLEG): container finished" podID="e190abed-d178-4ce7-9485-f6090ecf8578" containerID="60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800" exitCode=0 Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.577268 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" event={"ID":"e190abed-d178-4ce7-9485-f6090ecf8578","Type":"ContainerDied","Data":"60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800"} Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.585788 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerStarted","Data":"1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7"} Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.586294 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.586343 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.597096 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.630854 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z 
is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.634188 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.634259 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.634278 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.634305 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.634323 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:13Z","lastTransitionTime":"2026-01-20T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.644286 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.655166 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.655840 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.680946 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.705832 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.723201 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.738104 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.752927 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.755187 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.755226 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.755286 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.755539 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.755564 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:13Z","lastTransitionTime":"2026-01-20T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.770829 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.790305 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.805592 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.831067 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.853369 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0
d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.859989 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.860043 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.860054 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:13 crc kubenswrapper[4661]: 
I0120 18:06:13.860074 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.860089 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:13Z","lastTransitionTime":"2026-01-20T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.870387 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.888144 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.905564 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.919587 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.934224 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.947081 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.958644 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.962377 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.962411 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.962423 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.962442 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.962453 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:13Z","lastTransitionTime":"2026-01-20T18:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.971448 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.984626 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:13 crc kubenswrapper[4661]: I0120 18:06:13.985606 4661 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.065618 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.065727 4661 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.065747 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.065775 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.065792 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:14Z","lastTransitionTime":"2026-01-20T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.099458 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:43:48.442773138 +0000 UTC Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.169370 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.169463 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.169485 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.169531 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.169569 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:14Z","lastTransitionTime":"2026-01-20T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.273388 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.273481 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.273505 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.273538 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.273562 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:14Z","lastTransitionTime":"2026-01-20T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.377797 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.377866 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.377885 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.377914 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.377934 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:14Z","lastTransitionTime":"2026-01-20T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.483595 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.485090 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.485334 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.485641 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.485858 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:14Z","lastTransitionTime":"2026-01-20T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.590392 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.590584 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.590659 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.590764 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.590854 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:14Z","lastTransitionTime":"2026-01-20T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.594231 4661 generic.go:334] "Generic (PLEG): container finished" podID="e190abed-d178-4ce7-9485-f6090ecf8578" containerID="bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f" exitCode=0 Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.594505 4661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.594805 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" event={"ID":"e190abed-d178-4ce7-9485-f6090ecf8578","Type":"ContainerDied","Data":"bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f"} Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.695084 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.695133 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.695146 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.695168 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.695183 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:14Z","lastTransitionTime":"2026-01-20T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.797878 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.797910 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.797918 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.797936 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.797947 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:14Z","lastTransitionTime":"2026-01-20T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.901662 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.901737 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.901749 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.901777 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:14 crc kubenswrapper[4661]: I0120 18:06:14.901800 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:14Z","lastTransitionTime":"2026-01-20T18:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.007456 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.007509 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.007527 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.007566 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.007585 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:15Z","lastTransitionTime":"2026-01-20T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.007340 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxw
gg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.023626 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.042238 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.067224 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.078463 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.099618 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:13:09.295559095 +0000 UTC Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.101037 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\
"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 
18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.110016 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.110052 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.110062 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.110112 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.110122 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:15Z","lastTransitionTime":"2026-01-20T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.114443 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.141704 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:15 crc kubenswrapper[4661]: E0120 18:06:15.142090 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.142613 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:15 crc kubenswrapper[4661]: E0120 18:06:15.142771 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.143023 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:15 crc kubenswrapper[4661]: E0120 18:06:15.143280 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.146476 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"}
,{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.169960 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.190778 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.209643 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.212865 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.212900 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.212911 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.212932 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.212946 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:15Z","lastTransitionTime":"2026-01-20T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.233850 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.269062 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.286175 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.300258 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.313653 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.315704 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.315727 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.315737 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.315753 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.315762 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:15Z","lastTransitionTime":"2026-01-20T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.327349 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.338933 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.354434 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.369774 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.384058 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.393775 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.412722 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.417971 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.417999 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.418009 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.418025 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.418035 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:15Z","lastTransitionTime":"2026-01-20T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.430189 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.446051 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.461379 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.474086 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.487395 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.514882 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.520604 4661 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.520636 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.520644 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.520661 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.520684 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:15Z","lastTransitionTime":"2026-01-20T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.529166 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.542098 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.544796 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.554720 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.567829 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.582022 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.610999 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" event={"ID":"e190abed-d178-4ce7-9485-f6090ecf8578","Type":"ContainerStarted","Data":"9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9"} Jan 20 18:06:15 crc 
kubenswrapper[4661]: I0120 18:06:15.624528 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.624589 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.624603 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.624625 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.624636 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:15Z","lastTransitionTime":"2026-01-20T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.628918 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.644410 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d
b8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.668769 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.689174 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.704776 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.731241 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.731280 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.731290 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.731309 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.731321 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:15Z","lastTransitionTime":"2026-01-20T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.731400 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.758691 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.779406 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.792939 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.808464 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.833310 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.833363 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.833376 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.833396 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.833409 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:15Z","lastTransitionTime":"2026-01-20T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.834261 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.846361 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.858219 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-da
emon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.870586 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:15Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.937830 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.937868 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.937879 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.937894 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:15 crc kubenswrapper[4661]: I0120 18:06:15.937903 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:15Z","lastTransitionTime":"2026-01-20T18:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.041048 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.041091 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.041104 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.041125 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.041139 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:16Z","lastTransitionTime":"2026-01-20T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.100144 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:08:42.297281867 +0000 UTC Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.143380 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.143427 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.143438 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.143457 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.143467 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:16Z","lastTransitionTime":"2026-01-20T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.246115 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.246164 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.246177 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.246197 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.246214 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:16Z","lastTransitionTime":"2026-01-20T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.349839 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.349889 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.349913 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.349931 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.349941 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:16Z","lastTransitionTime":"2026-01-20T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.452640 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.452710 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.452721 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.452740 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.452767 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:16Z","lastTransitionTime":"2026-01-20T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.555121 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.555166 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.555177 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.555194 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.555204 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:16Z","lastTransitionTime":"2026-01-20T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.617872 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/0.log" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.621540 4661 generic.go:334] "Generic (PLEG): container finished" podID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerID="1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7" exitCode=1 Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.621613 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7"} Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.622932 4661 scope.go:117] "RemoveContainer" containerID="1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.640288 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.656265 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.660526 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.660581 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.660594 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.660619 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.660634 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:16Z","lastTransitionTime":"2026-01-20T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.669916 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.681386 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.709479 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:16Z\\\",\\\"message\\\":\\\"v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 18:06:16.459075 5855 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:16.460791 5855 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 18:06:16.460838 5855 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 18:06:16.460885 5855 factory.go:656] Stopping watch factory\\\\nI0120 18:06:16.460916 5855 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 18:06:16.460933 5855 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 18:06:16.460945 5855 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0120 18:06:16.459308 5855 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:16.459371 5855 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:16.459405 5855 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.724944 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582
5771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.742082 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.763236 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.763464 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.763478 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.763487 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.763501 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.763510 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:16Z","lastTransitionTime":"2026-01-20T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.778789 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.792620 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.805998 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.820047 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.836250 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.854444 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:16Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.866464 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.866494 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:16 crc 
kubenswrapper[4661]: I0120 18:06:16.866502 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.866520 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.866529 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:16Z","lastTransitionTime":"2026-01-20T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.968766 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.968805 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.968814 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.968830 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:16 crc kubenswrapper[4661]: I0120 18:06:16.968839 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:16Z","lastTransitionTime":"2026-01-20T18:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.071546 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.071581 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.071591 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.071607 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.071616 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.100502 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:47:45.562642913 +0000 UTC Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.141185 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:17 crc kubenswrapper[4661]: E0120 18:06:17.141343 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.141217 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:17 crc kubenswrapper[4661]: E0120 18:06:17.141875 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.141952 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:17 crc kubenswrapper[4661]: E0120 18:06:17.142024 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.161005 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.161031 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.161040 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.161057 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.161067 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: E0120 18:06:17.173806 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.177638 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.177689 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.177704 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.177719 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.177730 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: E0120 18:06:17.189939 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.193326 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.193349 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.193357 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.193370 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.193379 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: E0120 18:06:17.206095 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.209869 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.209911 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.209923 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.209939 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.209951 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: E0120 18:06:17.221170 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.225997 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.226024 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.226033 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.226048 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.226058 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: E0120 18:06:17.237473 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: E0120 18:06:17.237586 4661 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.239214 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.239242 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.239254 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.239271 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.239285 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.341785 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.341851 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.341863 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.341884 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.341914 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.445483 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.445533 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.445543 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.445565 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.445585 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.447770 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.468612 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.480452 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.499713 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.516522 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.535244 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.548200 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.548239 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.548250 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.548269 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.548284 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.557431 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.578137 4661 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.593683 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.612572 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:16Z\\\",\\\"message\\\":\\\"v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 18:06:16.459075 5855 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:16.460791 5855 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 18:06:16.460838 5855 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 18:06:16.460885 5855 factory.go:656] Stopping watch factory\\\\nI0120 18:06:16.460916 5855 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 18:06:16.460933 5855 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 18:06:16.460945 5855 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0120 18:06:16.459308 5855 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:16.459371 5855 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:16.459405 5855 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.627981 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/0.log" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.631547 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerStarted","Data":"936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b"} Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.632069 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.639161 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.652257 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.652381 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.652402 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.652432 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.652451 4661 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.660980 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.679368 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.692091 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.708927 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.728528 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.750710 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.756136 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.756232 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.756261 4661 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.756296 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.756357 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.765344 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.815646 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:16Z\\\",\\\"message\\\":\\\"v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 18:06:16.459075 5855 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:16.460791 5855 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 18:06:16.460838 5855 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 18:06:16.460885 5855 factory.go:656] Stopping watch factory\\\\nI0120 18:06:16.460916 5855 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 18:06:16.460933 5855 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 18:06:16.460945 5855 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0120 18:06:16.459308 5855 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:16.459371 5855 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:16.459405 5855 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.837998 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.859413 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.859456 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.859467 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.859485 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.859494 4661 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.870983 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.887999 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.902596 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.916982 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.931036 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.941829 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.958032 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.962326 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.962366 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.962378 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.962399 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.962414 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:17Z","lastTransitionTime":"2026-01-20T18:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.973605 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:17 crc kubenswrapper[4661]: I0120 18:06:17.986987 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:17Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.064864 4661 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.064921 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.064934 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.064955 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.064969 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:18Z","lastTransitionTime":"2026-01-20T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.101548 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:24:43.543293641 +0000 UTC Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.167713 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.167803 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.167818 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.167838 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.167852 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:18Z","lastTransitionTime":"2026-01-20T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.272093 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.272154 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.272179 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.272208 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.272227 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:18Z","lastTransitionTime":"2026-01-20T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.376381 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.376433 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.376443 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.376461 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.376472 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:18Z","lastTransitionTime":"2026-01-20T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.450547 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s"] Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.451298 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.453771 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.455595 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.468710 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.479234 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.479425 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.479497 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.479589 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.479694 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:18Z","lastTransitionTime":"2026-01-20T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.488771 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.502993 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.532318 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:16Z\\\",\\\"message\\\":\\\"v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 18:06:16.459075 5855 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:16.460791 5855 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 18:06:16.460838 5855 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 18:06:16.460885 5855 factory.go:656] Stopping watch factory\\\\nI0120 18:06:16.460916 5855 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 18:06:16.460933 5855 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 18:06:16.460945 5855 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0120 18:06:16.459308 5855 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:16.459371 5855 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:16.459405 5855 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.548584 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.563538 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.578758 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2cada643-eb7b-4036-8788-500338f73fac-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4hf4s\" (UID: \"2cada643-eb7b-4036-8788-500338f73fac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.578873 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gwqf\" (UniqueName: \"kubernetes.io/projected/2cada643-eb7b-4036-8788-500338f73fac-kube-api-access-6gwqf\") pod \"ovnkube-control-plane-749d76644c-4hf4s\" (UID: \"2cada643-eb7b-4036-8788-500338f73fac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.578972 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2cada643-eb7b-4036-8788-500338f73fac-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4hf4s\" (UID: \"2cada643-eb7b-4036-8788-500338f73fac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.579049 4661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2cada643-eb7b-4036-8788-500338f73fac-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4hf4s\" (UID: \"2cada643-eb7b-4036-8788-500338f73fac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.583347 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.583417 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.583437 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.583465 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.583486 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:18Z","lastTransitionTime":"2026-01-20T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.587306 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.606120 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.626184 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.637494 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/1.log" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.638287 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/0.log" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.642861 4661 generic.go:334] "Generic (PLEG): container finished" podID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerID="936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b" exitCode=1 Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.642993 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b"} Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.643093 4661 scope.go:117] "RemoveContainer" containerID="1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.645460 4661 scope.go:117] "RemoveContainer" containerID="936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b" Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.646002 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.659407 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.680469 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gwqf\" (UniqueName: \"kubernetes.io/projected/2cada643-eb7b-4036-8788-500338f73fac-kube-api-access-6gwqf\") pod \"ovnkube-control-plane-749d76644c-4hf4s\" (UID: \"2cada643-eb7b-4036-8788-500338f73fac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.680603 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2cada643-eb7b-4036-8788-500338f73fac-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4hf4s\" (UID: \"2cada643-eb7b-4036-8788-500338f73fac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.680722 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2cada643-eb7b-4036-8788-500338f73fac-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4hf4s\" (UID: \"2cada643-eb7b-4036-8788-500338f73fac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 
18:06:18.680823 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2cada643-eb7b-4036-8788-500338f73fac-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4hf4s\" (UID: \"2cada643-eb7b-4036-8788-500338f73fac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.682053 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2cada643-eb7b-4036-8788-500338f73fac-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4hf4s\" (UID: \"2cada643-eb7b-4036-8788-500338f73fac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.682114 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2cada643-eb7b-4036-8788-500338f73fac-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4hf4s\" (UID: \"2cada643-eb7b-4036-8788-500338f73fac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.687599 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.687629 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.687651 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.687785 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.687808 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:18Z","lastTransitionTime":"2026-01-20T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.689501 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.704946 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2cada643-eb7b-4036-8788-500338f73fac-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4hf4s\" (UID: \"2cada643-eb7b-4036-8788-500338f73fac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.718502 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.719489 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gwqf\" (UniqueName: 
\"kubernetes.io/projected/2cada643-eb7b-4036-8788-500338f73fac-kube-api-access-6gwqf\") pod \"ovnkube-control-plane-749d76644c-4hf4s\" (UID: \"2cada643-eb7b-4036-8788-500338f73fac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.742329 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.759113 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.770433 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.789484 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecba
bdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.793703 4661 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.793762 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.793779 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.793802 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.793816 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:18Z","lastTransitionTime":"2026-01-20T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.810254 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.825513 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.840937 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.854749 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.872394 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.883088 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.883232 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.883288 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.883331 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.883367 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.883401 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:06:34.883364873 +0000 UTC m=+51.214154545 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.883439 4661 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.883518 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:34.883500567 +0000 UTC m=+51.214290299 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.883538 4661 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.883584 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-20 18:06:34.883572679 +0000 UTC m=+51.214362351 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.883823 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.883830 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.883845 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.883863 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.883870 4661 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.883881 4661 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.883911 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:34.883900818 +0000 UTC m=+51.214690490 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:18 crc kubenswrapper[4661]: E0120 18:06:18.883947 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:34.883923939 +0000 UTC m=+51.214713711 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.886621 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.897401 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.897488 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.897550 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.897578 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.897596 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:18Z","lastTransitionTime":"2026-01-20T18:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.899326 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.911011 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.927110 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.942092 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.957074 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.972146 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.984862 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:18 crc kubenswrapper[4661]: I0120 18:06:18.997922 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.007075 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.007110 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.007118 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.007135 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.007145 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:19Z","lastTransitionTime":"2026-01-20T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.026301 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03
fe77fc1a75295040562e5c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:16Z\\\",\\\"message\\\":\\\"v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 18:06:16.459075 5855 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:16.460791 5855 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 18:06:16.460838 5855 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 18:06:16.460885 5855 factory.go:656] Stopping watch factory\\\\nI0120 18:06:16.460916 5855 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 18:06:16.460933 5855 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 18:06:16.460945 5855 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0120 18:06:16.459308 5855 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:16.459371 5855 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:16.459405 5855 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"message\\\":\\\"kPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:17.419651 6023 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419792 6023 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419824 6023 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419873 6023 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.420180 6023 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.449846 6023 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0120 18:06:17.449878 6023 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0120 18:06:17.449945 6023 ovnkube.go:599] Stopped ovnkube\\\\nI0120 
18:06:17.449972 6023 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 18:06:17.450061 6023 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri
-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.102318 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 15:35:00.172571626 +0000 UTC Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.109447 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.109502 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.109519 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.109548 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.109604 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:19Z","lastTransitionTime":"2026-01-20T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.142103 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.142103 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:19 crc kubenswrapper[4661]: E0120 18:06:19.142266 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.142407 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:19 crc kubenswrapper[4661]: E0120 18:06:19.142465 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:19 crc kubenswrapper[4661]: E0120 18:06:19.142722 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.212867 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.212930 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.212948 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.212976 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.212994 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:19Z","lastTransitionTime":"2026-01-20T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.316110 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.316165 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.316176 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.316196 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.316209 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:19Z","lastTransitionTime":"2026-01-20T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.419401 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.419455 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.419465 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.419486 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.419501 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:19Z","lastTransitionTime":"2026-01-20T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.522151 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.522189 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.522199 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.522215 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.522225 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:19Z","lastTransitionTime":"2026-01-20T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.625662 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.625715 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.625724 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.625744 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.625755 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:19Z","lastTransitionTime":"2026-01-20T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.656490 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" event={"ID":"2cada643-eb7b-4036-8788-500338f73fac","Type":"ContainerStarted","Data":"846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782"} Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.656580 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" event={"ID":"2cada643-eb7b-4036-8788-500338f73fac","Type":"ContainerStarted","Data":"59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289"} Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.656622 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" event={"ID":"2cada643-eb7b-4036-8788-500338f73fac","Type":"ContainerStarted","Data":"7dd212f4f1f0fb91a998033e1b374244182a330e2198c9586a533aaf580f16ad"} Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.658891 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/1.log" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.665104 4661 scope.go:117] "RemoveContainer" containerID="936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b" Jan 20 18:06:19 crc kubenswrapper[4661]: E0120 18:06:19.665399 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.672455 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.687742 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.703365 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.714298 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.727762 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.728439 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.728472 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.728486 4661 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.728507 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.728522 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:19Z","lastTransitionTime":"2026-01-20T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.739553 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.751067 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.761235 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.775593 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 
2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.797750 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.811659 4661 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.826527 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.830923 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.831053 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.831131 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.831214 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.831282 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:19Z","lastTransitionTime":"2026-01-20T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.839114 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.854746 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.875955 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ef28f4922dda916a079ff808db18e7c37635d7850639783e2eb8f743ac6cfa7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:16Z\\\",\\\"message\\\":\\\"v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0120 18:06:16.459075 5855 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:16.460791 5855 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 18:06:16.460838 5855 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 18:06:16.460885 5855 factory.go:656] Stopping watch factory\\\\nI0120 18:06:16.460916 5855 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 18:06:16.460933 5855 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 18:06:16.460945 5855 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0120 18:06:16.459308 5855 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:16.459371 5855 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:16.459405 5855 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"message\\\":\\\"kPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:17.419651 6023 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419792 6023 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419824 6023 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419873 6023 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.420180 6023 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.449846 6023 
shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0120 18:06:17.449878 6023 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0120 18:06:17.449945 6023 ovnkube.go:599] Stopped ovnkube\\\\nI0120 18:06:17.449972 6023 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 18:06:17.450061 6023 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.891868 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.903846 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.918036 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manag
er-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.934493 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.934581 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.934612 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.934650 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.934718 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:19Z","lastTransitionTime":"2026-01-20T18:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.938788 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.953820 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.958156 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-dhd6h"] Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.958987 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:19 crc kubenswrapper[4661]: E0120 18:06:19.959105 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.970014 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.982684 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:19 crc kubenswrapper[4661]: I0120 18:06:19.997550 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.017008 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067461
6e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.031638 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.037091 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.037185 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.037260 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.037330 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.037388 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:20Z","lastTransitionTime":"2026-01-20T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.044422 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.066477 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"message\\\":\\\"kPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:17.419651 6023 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419792 6023 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419824 6023 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419873 6023 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.420180 6023 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.449846 6023 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0120 18:06:17.449878 6023 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0120 18:06:17.449945 6023 ovnkube.go:599] Stopped ovnkube\\\\nI0120 18:06:17.449972 6023 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 18:06:17.450061 6023 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.080885 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.095823 4661 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.097224 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.097327 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95bf\" (UniqueName: \"kubernetes.io/projected/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-kube-api-access-j95bf\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.102793 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:37:09.238500203 +0000 UTC Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.108336 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.125313 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.140118 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.140183 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.140198 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.140221 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.140240 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:20Z","lastTransitionTime":"2026-01-20T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.151247 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.165750 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.193166 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"message\\\":\\\"kPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:17.419651 6023 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419792 6023 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419824 6023 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419873 6023 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.420180 6023 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.449846 6023 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0120 18:06:17.449878 6023 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0120 18:06:17.449945 6023 ovnkube.go:599] Stopped ovnkube\\\\nI0120 18:06:17.449972 6023 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 18:06:17.450061 6023 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.198182 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.198235 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j95bf\" (UniqueName: \"kubernetes.io/projected/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-kube-api-access-j95bf\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:20 crc kubenswrapper[4661]: E0120 18:06:20.198447 4661 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:20 crc kubenswrapper[4661]: E0120 18:06:20.198564 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs podName:58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:20.698536677 +0000 UTC m=+37.029326369 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs") pod "network-metrics-daemon-dhd6h" (UID: "58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.211714 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 
2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.224695 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j95bf\" (UniqueName: \"kubernetes.io/projected/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-kube-api-access-j95bf\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.234360 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee
1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.243692 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.243832 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.243899 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.243968 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.244033 4661 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:20Z","lastTransitionTime":"2026-01-20T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.256538 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.276635 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.296604 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.314908 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manag
er-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.333922 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.347933 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.348002 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.348020 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.348057 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.348077 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:20Z","lastTransitionTime":"2026-01-20T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.356044 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.371774 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.391807 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.412968 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.430145 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:20Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.452179 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.452251 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:20 crc 
kubenswrapper[4661]: I0120 18:06:20.452276 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.452315 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.452340 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:20Z","lastTransitionTime":"2026-01-20T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.556084 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.556149 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.556171 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.556199 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.556220 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:20Z","lastTransitionTime":"2026-01-20T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.659812 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.659875 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.659888 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.659911 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.659929 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:20Z","lastTransitionTime":"2026-01-20T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.704583 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:20 crc kubenswrapper[4661]: E0120 18:06:20.704891 4661 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:20 crc kubenswrapper[4661]: E0120 18:06:20.705384 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs podName:58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:21.705346615 +0000 UTC m=+38.036136347 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs") pod "network-metrics-daemon-dhd6h" (UID: "58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.762917 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.762980 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.762994 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.763015 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.763033 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:20Z","lastTransitionTime":"2026-01-20T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.866712 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.866802 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.866827 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.866861 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.866889 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:20Z","lastTransitionTime":"2026-01-20T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.970762 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.970845 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.970862 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.970890 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:20 crc kubenswrapper[4661]: I0120 18:06:20.970910 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:20Z","lastTransitionTime":"2026-01-20T18:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.074904 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.074982 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.074995 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.075019 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.075039 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:21Z","lastTransitionTime":"2026-01-20T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.103322 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:17:31.553988764 +0000 UTC Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.142005 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.142158 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:21 crc kubenswrapper[4661]: E0120 18:06:21.142279 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.142007 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:21 crc kubenswrapper[4661]: E0120 18:06:21.142414 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:21 crc kubenswrapper[4661]: E0120 18:06:21.142591 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.142059 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:21 crc kubenswrapper[4661]: E0120 18:06:21.143015 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.178493 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.178543 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.178557 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.178576 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.178590 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:21Z","lastTransitionTime":"2026-01-20T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.283466 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.283525 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.283547 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.283570 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.283589 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:21Z","lastTransitionTime":"2026-01-20T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.386987 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.387070 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.387091 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.387122 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.387142 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:21Z","lastTransitionTime":"2026-01-20T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.490390 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.490461 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.490477 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.490512 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.490532 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:21Z","lastTransitionTime":"2026-01-20T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.594131 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.594535 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.594551 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.594571 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.594584 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:21Z","lastTransitionTime":"2026-01-20T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.697917 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.697960 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.697971 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.697990 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.698008 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:21Z","lastTransitionTime":"2026-01-20T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.717027 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:21 crc kubenswrapper[4661]: E0120 18:06:21.717395 4661 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:21 crc kubenswrapper[4661]: E0120 18:06:21.717507 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs podName:58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:23.717480271 +0000 UTC m=+40.048269973 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs") pod "network-metrics-daemon-dhd6h" (UID: "58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.805584 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.805880 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.805903 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.805949 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.805971 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:21Z","lastTransitionTime":"2026-01-20T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.910233 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.910300 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.910319 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.910347 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:21 crc kubenswrapper[4661]: I0120 18:06:21.910365 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:21Z","lastTransitionTime":"2026-01-20T18:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.013928 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.014356 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.014426 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.014495 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.014561 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:22Z","lastTransitionTime":"2026-01-20T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.104094 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:57:30.595121721 +0000 UTC Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.118527 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.118607 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.118628 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.118656 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.118702 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:22Z","lastTransitionTime":"2026-01-20T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.220961 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.221233 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.221377 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.221467 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.221542 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:22Z","lastTransitionTime":"2026-01-20T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.324988 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.325063 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.325087 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.325119 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.325144 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:22Z","lastTransitionTime":"2026-01-20T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.428865 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.429322 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.429423 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.429520 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.429626 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:22Z","lastTransitionTime":"2026-01-20T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.532596 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.532701 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.532720 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.532748 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.532769 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:22Z","lastTransitionTime":"2026-01-20T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.636300 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.636381 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.636404 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.636436 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.636457 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:22Z","lastTransitionTime":"2026-01-20T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.739155 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.739231 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.739253 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.739278 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.739301 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:22Z","lastTransitionTime":"2026-01-20T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.842583 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.843003 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.843254 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.843798 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.844240 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:22Z","lastTransitionTime":"2026-01-20T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.948350 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.948796 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.949039 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.949257 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:22 crc kubenswrapper[4661]: I0120 18:06:22.949436 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:22Z","lastTransitionTime":"2026-01-20T18:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.053402 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.053769 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.053863 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.053980 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.054063 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:23Z","lastTransitionTime":"2026-01-20T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.105203 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:02:53.236035081 +0000 UTC Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.141925 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.141953 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.141925 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:23 crc kubenswrapper[4661]: E0120 18:06:23.142168 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:23 crc kubenswrapper[4661]: E0120 18:06:23.142260 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.141953 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:23 crc kubenswrapper[4661]: E0120 18:06:23.142417 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:23 crc kubenswrapper[4661]: E0120 18:06:23.142852 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.157283 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.158093 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.158355 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.158626 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.158818 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:23Z","lastTransitionTime":"2026-01-20T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.264288 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.264359 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.264375 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.264405 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.264428 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:23Z","lastTransitionTime":"2026-01-20T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.367224 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.368093 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.368138 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.368161 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.368178 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:23Z","lastTransitionTime":"2026-01-20T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.472111 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.472180 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.472201 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.472233 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.472254 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:23Z","lastTransitionTime":"2026-01-20T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.575606 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.575705 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.575721 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.575743 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.575756 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:23Z","lastTransitionTime":"2026-01-20T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.681600 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.682556 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.682842 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.683119 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.683337 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:23Z","lastTransitionTime":"2026-01-20T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.749152 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:23 crc kubenswrapper[4661]: E0120 18:06:23.749436 4661 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:23 crc kubenswrapper[4661]: E0120 18:06:23.749582 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs podName:58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:27.749554193 +0000 UTC m=+44.080343845 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs") pod "network-metrics-daemon-dhd6h" (UID: "58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.786584 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.786942 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.787111 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.787264 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.787409 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:23Z","lastTransitionTime":"2026-01-20T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.890428 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.890826 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.890993 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.891156 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.891289 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:23Z","lastTransitionTime":"2026-01-20T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.995984 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.996053 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.996069 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.996091 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:23 crc kubenswrapper[4661]: I0120 18:06:23.996108 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:23Z","lastTransitionTime":"2026-01-20T18:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.100268 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.100338 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.100354 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.100438 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.100485 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:24Z","lastTransitionTime":"2026-01-20T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.105791 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:23:44.039518073 +0000 UTC Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.166219 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 
2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.195073 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.204833 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.204875 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.204888 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.204911 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.204924 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:24Z","lastTransitionTime":"2026-01-20T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.214462 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.237221 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.256428 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.273202 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.289278 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.307323 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.307376 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.307392 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.307413 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.307431 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:24Z","lastTransitionTime":"2026-01-20T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.327767 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03
fe77fc1a75295040562e5c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"message\\\":\\\"kPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:17.419651 6023 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419792 6023 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419824 6023 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419873 6023 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.420180 6023 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.449846 6023 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0120 18:06:17.449878 6023 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0120 18:06:17.449945 6023 ovnkube.go:599] Stopped ovnkube\\\\nI0120 18:06:17.449972 6023 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 18:06:17.450061 6023 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.351417 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.377290 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.400048 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.410558 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.410608 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.410621 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.410644 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.410655 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:24Z","lastTransitionTime":"2026-01-20T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.414020 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.429426 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.443925 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.456937 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-da
emon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.471356 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.513361 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.513430 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.513446 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.513472 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.513489 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:24Z","lastTransitionTime":"2026-01-20T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.622227 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.622363 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.622392 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.622423 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.622451 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:24Z","lastTransitionTime":"2026-01-20T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.726210 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.726601 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.726736 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.726934 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.727025 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:24Z","lastTransitionTime":"2026-01-20T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.830813 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.830871 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.830886 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.830913 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.830928 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:24Z","lastTransitionTime":"2026-01-20T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.934629 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.934695 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.934706 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.934722 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:24 crc kubenswrapper[4661]: I0120 18:06:24.934736 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:24Z","lastTransitionTime":"2026-01-20T18:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.038527 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.038895 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.039006 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.039130 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.039221 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:25Z","lastTransitionTime":"2026-01-20T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.106636 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:14:47.019647979 +0000 UTC Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.141302 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.141380 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.141613 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.141733 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.142467 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.142621 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.142866 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.143094 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:25Z","lastTransitionTime":"2026-01-20T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:25 crc kubenswrapper[4661]: E0120 18:06:25.142980 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:25 crc kubenswrapper[4661]: E0120 18:06:25.143124 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.141778 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:25 crc kubenswrapper[4661]: E0120 18:06:25.143216 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:25 crc kubenswrapper[4661]: E0120 18:06:25.143646 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.247197 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.247577 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.247642 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.247776 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.247858 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:25Z","lastTransitionTime":"2026-01-20T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.351385 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.351810 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.351892 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.351971 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.352041 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:25Z","lastTransitionTime":"2026-01-20T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.455400 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.455510 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.455528 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.455585 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.455614 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:25Z","lastTransitionTime":"2026-01-20T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.558994 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.559068 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.559086 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.559116 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.559135 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:25Z","lastTransitionTime":"2026-01-20T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.662326 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.662393 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.662411 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.662438 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.662466 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:25Z","lastTransitionTime":"2026-01-20T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.765643 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.765748 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.765770 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.765803 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.765824 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:25Z","lastTransitionTime":"2026-01-20T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.869566 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.869638 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.869662 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.869733 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.869755 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:25Z","lastTransitionTime":"2026-01-20T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.972725 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.972785 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.972802 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.972823 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:25 crc kubenswrapper[4661]: I0120 18:06:25.972838 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:25Z","lastTransitionTime":"2026-01-20T18:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.077006 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.077053 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.077065 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.077084 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.077101 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:26Z","lastTransitionTime":"2026-01-20T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.107850 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:20:06.255179845 +0000 UTC Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.179413 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.179982 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.180003 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.180029 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.180043 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:26Z","lastTransitionTime":"2026-01-20T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.285295 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.285346 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.285382 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.285404 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.285420 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:26Z","lastTransitionTime":"2026-01-20T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.389040 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.389114 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.389135 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.389160 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.389179 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:26Z","lastTransitionTime":"2026-01-20T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.495901 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.495963 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.495984 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.496013 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.496031 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:26Z","lastTransitionTime":"2026-01-20T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.597907 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.597945 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.597954 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.597970 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.597980 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:26Z","lastTransitionTime":"2026-01-20T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.700522 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.700576 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.700592 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.700616 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.700632 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:26Z","lastTransitionTime":"2026-01-20T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.804549 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.804598 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.804611 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.804648 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.804661 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:26Z","lastTransitionTime":"2026-01-20T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.907014 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.907067 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.907081 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.907101 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:26 crc kubenswrapper[4661]: I0120 18:06:26.907113 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:26Z","lastTransitionTime":"2026-01-20T18:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.011491 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.011561 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.011580 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.011615 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.011639 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.108950 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:06:29.395193449 +0000 UTC Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.114277 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.114313 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.114322 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.114337 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.114348 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.141301 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.141402 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.141456 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.141310 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:27 crc kubenswrapper[4661]: E0120 18:06:27.141581 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:27 crc kubenswrapper[4661]: E0120 18:06:27.141770 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:27 crc kubenswrapper[4661]: E0120 18:06:27.141914 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:27 crc kubenswrapper[4661]: E0120 18:06:27.142095 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.217884 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.217945 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.217962 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.217993 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.218017 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.320965 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.321020 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.321038 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.321061 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.321078 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.424735 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.424784 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.424795 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.424818 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.424830 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.502008 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.502037 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.502046 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.502063 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.502073 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: E0120 18:06:27.518043 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:27Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.521919 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.521953 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.521968 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.521988 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.522000 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: E0120 18:06:27.538440 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:27Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.542917 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.542956 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.542969 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.543018 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.543031 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: E0120 18:06:27.557591 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:27Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.561834 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.561864 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.561876 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.561894 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.561906 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: E0120 18:06:27.580151 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:27Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.585849 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.585886 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.585899 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.585916 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.585929 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: E0120 18:06:27.602583 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:27Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:27 crc kubenswrapper[4661]: E0120 18:06:27.602764 4661 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.605200 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.605261 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.605280 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.605304 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.605323 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.708324 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.708385 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.708398 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.708549 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.708569 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.801289 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:27 crc kubenswrapper[4661]: E0120 18:06:27.801810 4661 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:27 crc kubenswrapper[4661]: E0120 18:06:27.802155 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs podName:58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:35.802133157 +0000 UTC m=+52.132922840 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs") pod "network-metrics-daemon-dhd6h" (UID: "58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.811627 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.811841 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.811939 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.812037 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.812121 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.915955 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.916891 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.916933 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.916965 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:27 crc kubenswrapper[4661]: I0120 18:06:27.916996 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:27Z","lastTransitionTime":"2026-01-20T18:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.020464 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.020552 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.020575 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.020606 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.020632 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:28Z","lastTransitionTime":"2026-01-20T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.110066 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:51:39.920476245 +0000 UTC Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.124082 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.124150 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.124171 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.124621 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.124871 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:28Z","lastTransitionTime":"2026-01-20T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.228613 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.229023 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.229122 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.229305 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.229426 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:28Z","lastTransitionTime":"2026-01-20T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.333472 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.334154 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.334314 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.334461 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.334595 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:28Z","lastTransitionTime":"2026-01-20T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.438541 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.438993 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.439169 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.439329 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.439479 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:28Z","lastTransitionTime":"2026-01-20T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.543736 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.543835 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.543894 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.543923 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.543982 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:28Z","lastTransitionTime":"2026-01-20T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.646715 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.646792 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.646809 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.646839 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.646859 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:28Z","lastTransitionTime":"2026-01-20T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.750522 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.750859 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.750909 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.750958 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.750979 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:28Z","lastTransitionTime":"2026-01-20T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.854882 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.854953 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.854972 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.855001 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.855064 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:28Z","lastTransitionTime":"2026-01-20T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.958412 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.958458 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.958467 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.958485 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:28 crc kubenswrapper[4661]: I0120 18:06:28.958496 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:28Z","lastTransitionTime":"2026-01-20T18:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.061099 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.061517 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.061699 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.061868 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.062000 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:29Z","lastTransitionTime":"2026-01-20T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.110606 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:45:21.25895468 +0000 UTC Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.141440 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.141586 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:29 crc kubenswrapper[4661]: E0120 18:06:29.141728 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.141462 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:29 crc kubenswrapper[4661]: E0120 18:06:29.141873 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.141469 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:29 crc kubenswrapper[4661]: E0120 18:06:29.142008 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:29 crc kubenswrapper[4661]: E0120 18:06:29.142081 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.165793 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.166102 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.166194 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.166304 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.166436 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:29Z","lastTransitionTime":"2026-01-20T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.270305 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.270385 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.270406 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.270433 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.270451 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:29Z","lastTransitionTime":"2026-01-20T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.373296 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.373379 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.373407 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.373436 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.373453 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:29Z","lastTransitionTime":"2026-01-20T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.476731 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.476797 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.476810 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.476833 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.476850 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:29Z","lastTransitionTime":"2026-01-20T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.579742 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.579804 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.579824 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.579890 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.579912 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:29Z","lastTransitionTime":"2026-01-20T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.683726 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.684190 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.684286 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.684400 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.684496 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:29Z","lastTransitionTime":"2026-01-20T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.788215 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.788261 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.788272 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.788291 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.788302 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:29Z","lastTransitionTime":"2026-01-20T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.891721 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.891792 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.891818 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.891847 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.891869 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:29Z","lastTransitionTime":"2026-01-20T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.995969 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.996046 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.996060 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.996101 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:29 crc kubenswrapper[4661]: I0120 18:06:29.996120 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:29Z","lastTransitionTime":"2026-01-20T18:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.099168 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.099214 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.099226 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.099247 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.099259 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:30Z","lastTransitionTime":"2026-01-20T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.111599 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:37:24.031567714 +0000 UTC Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.202044 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.202327 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.202418 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.202509 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.202591 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:30Z","lastTransitionTime":"2026-01-20T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.306898 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.306970 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.306988 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.307014 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.307032 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:30Z","lastTransitionTime":"2026-01-20T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.410159 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.410238 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.410255 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.410285 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.410308 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:30Z","lastTransitionTime":"2026-01-20T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.514255 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.514331 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.514348 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.514375 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.514395 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:30Z","lastTransitionTime":"2026-01-20T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.617312 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.617392 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.617417 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.617450 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.617475 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:30Z","lastTransitionTime":"2026-01-20T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.720906 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.720978 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.721000 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.721031 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.721056 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:30Z","lastTransitionTime":"2026-01-20T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.823981 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.824049 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.824074 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.824106 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.824129 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:30Z","lastTransitionTime":"2026-01-20T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.929108 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.929175 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.929190 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.929212 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:30 crc kubenswrapper[4661]: I0120 18:06:30.929232 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:30Z","lastTransitionTime":"2026-01-20T18:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.039559 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.040823 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.040859 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.040887 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.040905 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:31Z","lastTransitionTime":"2026-01-20T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.112501 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:08:41.144449024 +0000 UTC Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.141262 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:31 crc kubenswrapper[4661]: E0120 18:06:31.141431 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.141924 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.142028 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:31 crc kubenswrapper[4661]: E0120 18:06:31.142117 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.142137 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:31 crc kubenswrapper[4661]: E0120 18:06:31.142188 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:31 crc kubenswrapper[4661]: E0120 18:06:31.142263 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.144848 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.144903 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.144920 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.144946 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.144963 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:31Z","lastTransitionTime":"2026-01-20T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.248331 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.248387 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.248407 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.248432 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.248450 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:31Z","lastTransitionTime":"2026-01-20T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.351067 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.351109 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.351121 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.351138 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.351157 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:31Z","lastTransitionTime":"2026-01-20T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.456324 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.456362 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.456373 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.456394 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.456407 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:31Z","lastTransitionTime":"2026-01-20T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.558982 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.559070 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.559085 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.559107 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.559121 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:31Z","lastTransitionTime":"2026-01-20T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.663467 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.663548 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.663573 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.663605 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.663628 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:31Z","lastTransitionTime":"2026-01-20T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.766924 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.766981 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.767001 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.767030 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.767048 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:31Z","lastTransitionTime":"2026-01-20T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.870574 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.870618 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.870633 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.870656 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.871583 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:31Z","lastTransitionTime":"2026-01-20T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.975040 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.975087 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.975096 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.975113 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:31 crc kubenswrapper[4661]: I0120 18:06:31.975127 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:31Z","lastTransitionTime":"2026-01-20T18:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.078230 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.078290 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.078308 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.078340 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.078358 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:32Z","lastTransitionTime":"2026-01-20T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.112714 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:11:55.066874703 +0000 UTC Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.182357 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.182415 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.182434 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.182459 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.182476 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:32Z","lastTransitionTime":"2026-01-20T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.285429 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.285537 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.285557 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.285627 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.285647 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:32Z","lastTransitionTime":"2026-01-20T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.389626 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.389747 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.389773 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.389805 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.389830 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:32Z","lastTransitionTime":"2026-01-20T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.492951 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.493022 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.493041 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.493071 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.493090 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:32Z","lastTransitionTime":"2026-01-20T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.596589 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.596748 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.596777 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.596846 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.596870 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:32Z","lastTransitionTime":"2026-01-20T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.699941 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.699988 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.699999 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.700018 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.700032 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:32Z","lastTransitionTime":"2026-01-20T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.803993 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.804457 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.804606 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.804786 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.804942 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:32Z","lastTransitionTime":"2026-01-20T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.908387 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.908811 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.908995 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.909151 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:32 crc kubenswrapper[4661]: I0120 18:06:32.909270 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:32Z","lastTransitionTime":"2026-01-20T18:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.012621 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.013080 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.013230 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.013400 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.013529 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:33Z","lastTransitionTime":"2026-01-20T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.112959 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 22:20:28.219205638 +0000 UTC Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.115878 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.115928 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.115941 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.115963 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.115976 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:33Z","lastTransitionTime":"2026-01-20T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.141846 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.141885 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.141881 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.141934 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:33 crc kubenswrapper[4661]: E0120 18:06:33.142897 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:33 crc kubenswrapper[4661]: E0120 18:06:33.142959 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:33 crc kubenswrapper[4661]: E0120 18:06:33.143163 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:33 crc kubenswrapper[4661]: E0120 18:06:33.143331 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.224276 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.224352 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.224372 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.224399 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.224415 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:33Z","lastTransitionTime":"2026-01-20T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.328279 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.328335 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.328349 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.328367 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.328379 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:33Z","lastTransitionTime":"2026-01-20T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.431962 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.432603 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.432946 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.433192 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.433382 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:33Z","lastTransitionTime":"2026-01-20T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.536626 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.536704 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.536717 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.536735 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.536745 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:33Z","lastTransitionTime":"2026-01-20T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.640103 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.640196 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.640213 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.640237 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.640254 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:33Z","lastTransitionTime":"2026-01-20T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.742446 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.742508 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.742517 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.742532 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.742543 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:33Z","lastTransitionTime":"2026-01-20T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.845734 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.846008 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.846080 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.846166 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.846277 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:33Z","lastTransitionTime":"2026-01-20T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.948733 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.949216 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.949443 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.949840 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:33 crc kubenswrapper[4661]: I0120 18:06:33.950076 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:33Z","lastTransitionTime":"2026-01-20T18:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.052599 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.052629 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.052636 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.052676 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.052687 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:34Z","lastTransitionTime":"2026-01-20T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.113796 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 06:36:22.865667246 +0000 UTC Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.142053 4661 scope.go:117] "RemoveContainer" containerID="936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.155696 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.155949 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.156111 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.156249 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.156381 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:34Z","lastTransitionTime":"2026-01-20T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.159885 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.184237 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.201504 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.217562 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manag
er-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.236063 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.254741 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.265137 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.265216 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.265237 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.265270 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.265289 4661 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:34Z","lastTransitionTime":"2026-01-20T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.273952 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.288097 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.301484 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.318132 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.336093 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.354585 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.378887 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.378942 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.378953 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.378970 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.378984 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:34Z","lastTransitionTime":"2026-01-20T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.383066 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"message\\\":\\\"kPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:17.419651 6023 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419792 6023 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419824 6023 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419873 6023 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.420180 6023 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.449846 6023 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0120 18:06:17.449878 6023 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0120 18:06:17.449945 6023 ovnkube.go:599] Stopped ovnkube\\\\nI0120 18:06:17.449972 6023 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 18:06:17.450061 6023 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.402926 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.425831 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.441838 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.482539 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.482579 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.482591 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.482610 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.482624 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:34Z","lastTransitionTime":"2026-01-20T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.585934 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.586293 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.586306 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.586322 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.586620 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:34Z","lastTransitionTime":"2026-01-20T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.655763 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.692788 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.693941 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.693974 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.693983 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.694000 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.694010 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:34Z","lastTransitionTime":"2026-01-20T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.706392 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.723787 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.726949 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/1.log" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.729879 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerStarted","Data":"823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105"} Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.730572 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.744566 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.766723 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.790015 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03
fe77fc1a75295040562e5c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"message\\\":\\\"kPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:17.419651 6023 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419792 6023 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419824 6023 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419873 6023 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.420180 6023 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.449846 6023 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0120 18:06:17.449878 6023 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0120 18:06:17.449945 6023 ovnkube.go:599] Stopped ovnkube\\\\nI0120 18:06:17.449972 6023 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 18:06:17.450061 6023 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.796850 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.796885 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.796893 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.796909 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.796941 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:34Z","lastTransitionTime":"2026-01-20T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.801221 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.813548 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.825064 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.839208 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.851177 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.864084 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.876974 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.891438 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.894372 4661 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.894478 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:34 crc kubenswrapper[4661]: E0120 18:06:34.894567 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:07:06.894548107 +0000 UTC m=+83.225337769 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:06:34 crc kubenswrapper[4661]: E0120 18:06:34.894622 4661 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:34 crc kubenswrapper[4661]: E0120 18:06:34.894721 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:07:06.894703231 +0000 UTC m=+83.225492893 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:06:34 crc kubenswrapper[4661]: E0120 18:06:34.894765 4661 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:34 crc kubenswrapper[4661]: E0120 18:06:34.894821 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:07:06.894813614 +0000 UTC m=+83.225603276 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.894628 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.895449 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:34 crc kubenswrapper[4661]: E0120 18:06:34.895538 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:34 crc kubenswrapper[4661]: E0120 18:06:34.895561 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:34 crc kubenswrapper[4661]: E0120 18:06:34.895573 4661 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:34 crc kubenswrapper[4661]: E0120 18:06:34.895711 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:06:34 crc kubenswrapper[4661]: E0120 18:06:34.895729 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:06:34 crc kubenswrapper[4661]: E0120 18:06:34.895738 4661 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:34 crc kubenswrapper[4661]: E0120 18:06:34.895775 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 18:07:06.895767121 +0000 UTC m=+83.226556773 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:34 crc kubenswrapper[4661]: E0120 18:06:34.895884 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 18:07:06.895829973 +0000 UTC m=+83.226619825 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.896033 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.900026 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.900059 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.900070 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.900096 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.900110 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:34Z","lastTransitionTime":"2026-01-20T18:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.906798 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.927183 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.945518 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.969899 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:34 crc kubenswrapper[4661]: I0120 18:06:34.987375 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:34Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.003530 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.003580 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.003591 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.003611 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.003626 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:35Z","lastTransitionTime":"2026-01-20T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.006247 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.018325 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.030881 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e82a8ce9-e23c-4fbc-9d26-0e81374193ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.046701 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.077390 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\
\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.092204 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.106454 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.106528 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.106543 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.106568 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.106587 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:35Z","lastTransitionTime":"2026-01-20T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.113974 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:05:35.256967869 +0000 UTC Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.114255 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.130878 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.141547 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.141634 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.141771 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:35 crc kubenswrapper[4661]: E0120 18:06:35.141771 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.141569 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:35 crc kubenswrapper[4661]: E0120 18:06:35.141954 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:35 crc kubenswrapper[4661]: E0120 18:06:35.142102 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:35 crc kubenswrapper[4661]: E0120 18:06:35.142164 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.149938 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.162340 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.183754 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"message\\\":\\\"kPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:17.419651 6023 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419792 6023 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419824 6023 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419873 6023 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.420180 6023 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.449846 6023 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0120 18:06:17.449878 6023 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0120 18:06:17.449945 6023 ovnkube.go:599] Stopped ovnkube\\\\nI0120 18:06:17.449972 6023 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 18:06:17.450061 6023 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.198825 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.209378 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.209440 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.209452 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.209474 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.209487 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:35Z","lastTransitionTime":"2026-01-20T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.213214 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.233211 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.249879 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.312744 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.312789 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.312799 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.312818 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.312830 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:35Z","lastTransitionTime":"2026-01-20T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.416835 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.416890 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.416902 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.416922 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.416934 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:35Z","lastTransitionTime":"2026-01-20T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.520376 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.520433 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.520451 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.520478 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.520497 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:35Z","lastTransitionTime":"2026-01-20T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.624024 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.624078 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.624088 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.624107 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.624118 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:35Z","lastTransitionTime":"2026-01-20T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.728429 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.728525 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.728549 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.728584 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.728607 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:35Z","lastTransitionTime":"2026-01-20T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.739442 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/2.log" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.740716 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/1.log" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.747579 4661 generic.go:334] "Generic (PLEG): container finished" podID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerID="823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105" exitCode=1 Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.747705 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105"} Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.747783 4661 scope.go:117] "RemoveContainer" containerID="936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.749174 4661 scope.go:117] "RemoveContainer" containerID="823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105" Jan 20 18:06:35 crc kubenswrapper[4661]: E0120 18:06:35.749411 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.772703 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.793119 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.804758 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:35 crc kubenswrapper[4661]: E0120 18:06:35.805438 4661 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:35 crc kubenswrapper[4661]: E0120 18:06:35.805523 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs podName:58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131 nodeName:}" failed. No retries permitted until 2026-01-20 18:06:51.805497321 +0000 UTC m=+68.136287013 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs") pod "network-metrics-daemon-dhd6h" (UID: "58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.826772 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555a
f9dd66346f33065839076105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://936cc61252844f599e65f506adbc3ca6e06fee03fe77fc1a75295040562e5c1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"message\\\":\\\"kPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:17.419651 6023 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419792 6023 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419824 6023 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:17.419873 6023 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.420180 6023 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:17.449846 6023 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0120 18:06:17.449878 6023 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0120 18:06:17.449945 6023 ovnkube.go:599] Stopped ovnkube\\\\nI0120 18:06:17.449972 6023 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 18:06:17.450061 6023 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:35Z\\\",\\\"message\\\":\\\".BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:35.219065 6217 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:35.219313 6217 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.219547 6217 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220089 6217 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220357 6217 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220410 6217 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220459 6217 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.221651 6217 handler.go:190] Sending *v1.EgressIP event 
handler 8 for removal\\\\nI0120 18:06:35.221712 6217 factory.go:656] Stopping watch factory\\\\nI0120 18:06:35.221730 6217 ovnkube.go:599] Stopped ovnkube\\\\nI0120 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.832283 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.832339 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.832356 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.832383 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.832401 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:35Z","lastTransitionTime":"2026-01-20T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.843555 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.864951 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.880419 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.898357 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.915406 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.931891 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.935424 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.935467 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.935482 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.935501 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.935512 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:35Z","lastTransitionTime":"2026-01-20T18:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.948163 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.963971 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.977205 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:35 crc kubenswrapper[4661]: I0120 18:06:35.997472 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:35Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.020092 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.045298 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.045368 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.045387 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.045416 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.045442 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:36Z","lastTransitionTime":"2026-01-20T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.047904 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.084290 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.099422 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e82a8ce9-e23c-4fbc-9d26-0e81374193ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.114881 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 10:21:42.44063666 +0000 UTC Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.147363 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.147392 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.147400 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.147412 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.147421 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:36Z","lastTransitionTime":"2026-01-20T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.250762 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.251224 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.251468 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.251742 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.251973 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:36Z","lastTransitionTime":"2026-01-20T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.355246 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.355602 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.355785 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.355921 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.356040 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:36Z","lastTransitionTime":"2026-01-20T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.459795 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.459862 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.459885 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.459917 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.459940 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:36Z","lastTransitionTime":"2026-01-20T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.563638 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.563713 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.563725 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.563745 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.563759 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:36Z","lastTransitionTime":"2026-01-20T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.665828 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.665877 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.665890 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.665910 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.665923 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:36Z","lastTransitionTime":"2026-01-20T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.753644 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/2.log" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.756720 4661 scope.go:117] "RemoveContainer" containerID="823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105" Jan 20 18:06:36 crc kubenswrapper[4661]: E0120 18:06:36.756876 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.771698 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.771744 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.771759 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.771782 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.771795 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:36Z","lastTransitionTime":"2026-01-20T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.781917 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:35Z\\\",\\\"message\\\":\\\".BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:35.219065 6217 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:35.219313 6217 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.219547 6217 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220089 6217 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220357 6217 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220410 6217 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220459 6217 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.221651 6217 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 18:06:35.221712 6217 factory.go:656] Stopping watch factory\\\\nI0120 18:06:35.221730 6217 ovnkube.go:599] Stopped ovnkube\\\\nI0120 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.797284 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.813493 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.836342 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.857439 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.875801 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.876153 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.876194 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.876205 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.876226 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.876239 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:36Z","lastTransitionTime":"2026-01-20T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.892436 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\
\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.910578 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.926773 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.943412 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.957615 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.971887 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.979352 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.979521 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.979587 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.979707 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.979777 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:36Z","lastTransitionTime":"2026-01-20T18:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.986088 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:36 crc kubenswrapper[4661]: I0120 18:06:36.999321 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:36Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.009621 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e82a8ce9-e23c-4fbc-9d26-0e81374193ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:37Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.027379 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:37Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.049156 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\
\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:37Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.082801 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.082859 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.082869 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.082884 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.082894 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.116582 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:06:48.959361029 +0000 UTC Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.141336 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:37 crc kubenswrapper[4661]: E0120 18:06:37.141533 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.141370 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:37 crc kubenswrapper[4661]: E0120 18:06:37.141644 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.141336 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:37 crc kubenswrapper[4661]: E0120 18:06:37.141761 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.141953 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:37 crc kubenswrapper[4661]: E0120 18:06:37.142138 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.185852 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.185910 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.185919 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.185933 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.185944 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.291007 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.291421 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.291595 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.291757 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.291886 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.397253 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.397310 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.397322 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.397342 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.397352 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.499579 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.499984 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.500119 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.500267 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.500468 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.603503 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.603545 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.603556 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.603573 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.603585 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.707056 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.707140 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.707152 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.707171 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.707194 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.810643 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.810740 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.810757 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.810780 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.810795 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.884556 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.884610 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.884623 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.884644 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.884657 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:37 crc kubenswrapper[4661]: E0120 18:06:37.897301 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:37Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.902811 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.902872 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.902887 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.902912 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.902927 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:37 crc kubenswrapper[4661]: E0120 18:06:37.917981 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:37Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.923296 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.923337 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.923354 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.923376 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.923390 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:37 crc kubenswrapper[4661]: E0120 18:06:37.947072 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:37Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.952313 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.952364 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.952374 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.952393 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.952405 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:37 crc kubenswrapper[4661]: E0120 18:06:37.968291 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:37Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.973712 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.973762 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.973772 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.973789 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.973801 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:37 crc kubenswrapper[4661]: E0120 18:06:37.988545 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:37Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:37 crc kubenswrapper[4661]: E0120 18:06:37.989050 4661 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.990971 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.991081 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.991176 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.991283 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:37 crc kubenswrapper[4661]: I0120 18:06:37.991371 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:37Z","lastTransitionTime":"2026-01-20T18:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.094386 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.094435 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.094452 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.094473 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.094486 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:38Z","lastTransitionTime":"2026-01-20T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.116820 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:30:21.307527191 +0000 UTC Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.197562 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.197605 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.197614 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.197630 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.197640 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:38Z","lastTransitionTime":"2026-01-20T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.301161 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.301213 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.301226 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.301251 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.301266 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:38Z","lastTransitionTime":"2026-01-20T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
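[annotation] The certificate_manager entries show the kubelet-serving certificate is still valid until 2026-02-24, but the jittered rotation deadlines it keeps computing (2025-12-03, 2025-12-07, and so on) already lie in the past relative to the 2026-01-20 clock, so rotation is immediately due on every check. A sketch for pulling that expiration/deadline pair out of one of these lines and flagging an overdue rotation is shown below; the line format is copied from the entries above, and the parsing itself is only an illustration.

# Sketch: extract the expiration / rotation-deadline pair from a certificate_manager
# log line (format copied from the journal above) and flag deadlines already past.
import re
from datetime import datetime, timezone

LINE = ('certificate_manager.go:356] kubernetes.io/kubelet-serving: '
        'Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, '
        'rotation deadline is 2025-12-03 23:30:21.307527191 +0000 UTC')

m = re.search(r'expiration is (.+?) \+0000 UTC, rotation deadline is (.+?) \+0000 UTC', LINE)
expiration = datetime.strptime(m.group(1), '%Y-%m-%d %H:%M:%S').replace(tzinfo=timezone.utc)
deadline = datetime.strptime(m.group(2)[:26], '%Y-%m-%d %H:%M:%S.%f').replace(tzinfo=timezone.utc)
# The deadline carries nanoseconds; slicing to 26 characters trims it to microseconds for %f.

now = datetime.now(timezone.utc)
print('certificate expires:', expiration, '| rotation deadline:', deadline)
print('rotation overdue:   ', now > deadline)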
Has your network provider started?"} Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.404391 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.404433 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.404442 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.404459 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.404469 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:38Z","lastTransitionTime":"2026-01-20T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.507555 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.507631 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.507649 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.507686 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.507698 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:38Z","lastTransitionTime":"2026-01-20T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.610397 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.610462 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.610474 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.610496 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.610512 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:38Z","lastTransitionTime":"2026-01-20T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.713270 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.713315 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.713324 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.713343 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.713355 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:38Z","lastTransitionTime":"2026-01-20T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.816545 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.816838 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.816864 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.816891 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.816907 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:38Z","lastTransitionTime":"2026-01-20T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.920197 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.920272 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.920296 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.920328 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:38 crc kubenswrapper[4661]: I0120 18:06:38.920351 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:38Z","lastTransitionTime":"2026-01-20T18:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.025388 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.025438 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.025449 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.025472 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.025488 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:39Z","lastTransitionTime":"2026-01-20T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.117291 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:42:27.815809205 +0000 UTC Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.128943 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.128991 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.129008 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.129035 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.129056 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:39Z","lastTransitionTime":"2026-01-20T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.141470 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:39 crc kubenswrapper[4661]: E0120 18:06:39.141614 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.141846 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.141995 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:39 crc kubenswrapper[4661]: E0120 18:06:39.142043 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.142065 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:39 crc kubenswrapper[4661]: E0120 18:06:39.142128 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:39 crc kubenswrapper[4661]: E0120 18:06:39.142184 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.232126 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.232178 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.232189 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.232210 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.232233 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:39Z","lastTransitionTime":"2026-01-20T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
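[annotation] The "No sandbox for pod can be found" and "Error syncing pod, skipping" pairs above show which workloads are blocked by the missing CNI configuration: network-check-target, network-check-source, network-metrics-daemon, and the networking-console-plugin all need pod sandboxes with cluster networking and cannot start until the network plugin is up. A small sketch that tallies the skipped pods from saved journal text follows; the input file name is an assumption.

# Sketch: count which pods the kubelet keeps skipping for the CNI error,
# scanning saved journal text (file name is assumed).
import re
from collections import Counter

pattern = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)"')

with open("kubelet-journal.log", encoding="utf-8", errors="replace") as f:
    skipped = Counter(m.group(1) for line in f for m in pattern.finditer(line))

for pod, count in skipped.most_common():
    print(f"{count:4d}  {pod}")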
Has your network provider started?"} Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.335429 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.335872 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.335990 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.336131 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.336245 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:39Z","lastTransitionTime":"2026-01-20T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.439945 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.440002 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.440015 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.440035 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.440047 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:39Z","lastTransitionTime":"2026-01-20T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.544114 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.544189 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.544212 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.544238 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.544257 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:39Z","lastTransitionTime":"2026-01-20T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.646862 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.646904 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.646912 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.646931 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.646942 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:39Z","lastTransitionTime":"2026-01-20T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.750980 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.751041 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.751054 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.751075 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.751092 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:39Z","lastTransitionTime":"2026-01-20T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.854189 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.854240 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.854252 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.854270 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.854284 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:39Z","lastTransitionTime":"2026-01-20T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.957186 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.957235 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.957247 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.957265 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:39 crc kubenswrapper[4661]: I0120 18:06:39.957277 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:39Z","lastTransitionTime":"2026-01-20T18:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.060457 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.060495 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.060505 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.060521 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.060532 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:40Z","lastTransitionTime":"2026-01-20T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.117869 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:47:39.113059603 +0000 UTC Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.162886 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.162938 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.162954 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.162975 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.162987 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:40Z","lastTransitionTime":"2026-01-20T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.266482 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.266519 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.266528 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.266545 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.266557 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:40Z","lastTransitionTime":"2026-01-20T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.369982 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.370049 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.370067 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.370092 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.370110 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:40Z","lastTransitionTime":"2026-01-20T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.474897 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.474973 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.474994 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.475022 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.475047 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:40Z","lastTransitionTime":"2026-01-20T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.578946 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.579079 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.579101 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.579175 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.579197 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:40Z","lastTransitionTime":"2026-01-20T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.682468 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.682536 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.682559 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.682591 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.682613 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:40Z","lastTransitionTime":"2026-01-20T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.786772 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.786828 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.786843 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.786865 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.786881 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:40Z","lastTransitionTime":"2026-01-20T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.889605 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.889698 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.889713 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.889737 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.889782 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:40Z","lastTransitionTime":"2026-01-20T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.993486 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.993558 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.993576 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.993606 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:40 crc kubenswrapper[4661]: I0120 18:06:40.993627 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:40Z","lastTransitionTime":"2026-01-20T18:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.096841 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.096926 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.096946 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.096974 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.096995 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:41Z","lastTransitionTime":"2026-01-20T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.118386 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:51:53.358835252 +0000 UTC Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.141821 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.141860 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.141821 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:41 crc kubenswrapper[4661]: E0120 18:06:41.141993 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.141857 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:41 crc kubenswrapper[4661]: E0120 18:06:41.142233 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:41 crc kubenswrapper[4661]: E0120 18:06:41.142317 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:41 crc kubenswrapper[4661]: E0120 18:06:41.142865 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.200549 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.200617 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.200634 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.200700 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.200727 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:41Z","lastTransitionTime":"2026-01-20T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.304117 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.304169 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.304179 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.304203 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.304217 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:41Z","lastTransitionTime":"2026-01-20T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.406928 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.407038 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.407067 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.407098 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.407120 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:41Z","lastTransitionTime":"2026-01-20T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.510497 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.510562 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.510585 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.510616 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.510640 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:41Z","lastTransitionTime":"2026-01-20T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.614757 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.614828 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.614851 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.614886 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.614908 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:41Z","lastTransitionTime":"2026-01-20T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.717946 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.718024 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.718042 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.718070 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.718090 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:41Z","lastTransitionTime":"2026-01-20T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.821307 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.821363 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.821380 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.821405 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.821424 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:41Z","lastTransitionTime":"2026-01-20T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.925424 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.925466 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.925476 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.925494 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:41 crc kubenswrapper[4661]: I0120 18:06:41.925507 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:41Z","lastTransitionTime":"2026-01-20T18:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.029338 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.029392 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.029404 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.029426 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.029445 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:42Z","lastTransitionTime":"2026-01-20T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.119694 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:30:24.846477546 +0000 UTC Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.132473 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.132518 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.132529 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.132551 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.132568 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:42Z","lastTransitionTime":"2026-01-20T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.235353 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.235389 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.235398 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.235415 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.235427 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:42Z","lastTransitionTime":"2026-01-20T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.339319 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.339386 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.339411 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.339439 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.339456 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:42Z","lastTransitionTime":"2026-01-20T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.443725 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.443796 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.443816 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.443842 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.443859 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:42Z","lastTransitionTime":"2026-01-20T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.547289 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.548122 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.548391 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.548492 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.548578 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:42Z","lastTransitionTime":"2026-01-20T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.651800 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.652142 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.652238 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.652332 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.652411 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:42Z","lastTransitionTime":"2026-01-20T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.756229 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.756438 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.756456 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.756486 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.756503 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:42Z","lastTransitionTime":"2026-01-20T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.859139 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.859234 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.859267 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.859305 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.859333 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:42Z","lastTransitionTime":"2026-01-20T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.963018 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.963076 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.963089 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.963112 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:42 crc kubenswrapper[4661]: I0120 18:06:42.963126 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:42Z","lastTransitionTime":"2026-01-20T18:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.066714 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.066788 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.066806 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.066833 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.066850 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:43Z","lastTransitionTime":"2026-01-20T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.120294 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:05:09.753191283 +0000 UTC Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.141890 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.141958 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.142036 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.141890 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:43 crc kubenswrapper[4661]: E0120 18:06:43.142153 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:43 crc kubenswrapper[4661]: E0120 18:06:43.142269 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:43 crc kubenswrapper[4661]: E0120 18:06:43.142398 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:43 crc kubenswrapper[4661]: E0120 18:06:43.142376 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.170358 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.170407 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.170419 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.170440 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.170452 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:43Z","lastTransitionTime":"2026-01-20T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.273296 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.273369 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.273396 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.273429 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.273453 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:43Z","lastTransitionTime":"2026-01-20T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.376854 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.376995 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.377048 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.377080 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.377199 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:43Z","lastTransitionTime":"2026-01-20T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.481320 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.481434 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.481452 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.481480 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.481498 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:43Z","lastTransitionTime":"2026-01-20T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.590643 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.590763 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.590785 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.590815 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.590857 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:43Z","lastTransitionTime":"2026-01-20T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.698128 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.698175 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.698186 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.698207 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.698220 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:43Z","lastTransitionTime":"2026-01-20T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.801849 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.801927 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.801947 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.801979 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.802000 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:43Z","lastTransitionTime":"2026-01-20T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.910310 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.910374 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.910387 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.910416 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:43 crc kubenswrapper[4661]: I0120 18:06:43.910428 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:43Z","lastTransitionTime":"2026-01-20T18:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.013521 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.013575 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.013587 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.013607 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.013622 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:44Z","lastTransitionTime":"2026-01-20T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.116175 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.116236 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.116253 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.116278 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.116297 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:44Z","lastTransitionTime":"2026-01-20T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.121358 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:56:01.005429282 +0000 UTC Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.157263 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e82a8ce9-e23c-4fbc-9d26-0e81374193ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.178085 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.198885 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.219949 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.220001 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:44 crc 
kubenswrapper[4661]: I0120 18:06:44.220015 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.220038 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.220058 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:44Z","lastTransitionTime":"2026-01-20T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.225659 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.241836 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.253998 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.263448 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.285355 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:35Z\\\",\\\"message\\\":\\\".BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:35.219065 6217 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:35.219313 6217 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.219547 6217 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220089 6217 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220357 6217 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220410 6217 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220459 6217 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.221651 6217 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 18:06:35.221712 6217 factory.go:656] Stopping watch factory\\\\nI0120 18:06:35.221730 6217 ovnkube.go:599] Stopped ovnkube\\\\nI0120 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.297772 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.311655 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.322586 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.322623 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.322635 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.322656 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.322689 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:44Z","lastTransitionTime":"2026-01-20T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.325036 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.337869 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.351295 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.365025 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.379615 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.394124 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.407880 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:44Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.425275 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.425330 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.425345 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.425369 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.425385 4661 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:44Z","lastTransitionTime":"2026-01-20T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.527714 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.527750 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.527761 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.527779 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.527790 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:44Z","lastTransitionTime":"2026-01-20T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.630171 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.630233 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.630249 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.630271 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.630285 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:44Z","lastTransitionTime":"2026-01-20T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.732802 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.732889 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.732902 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.732923 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.732938 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:44Z","lastTransitionTime":"2026-01-20T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.836080 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.836129 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.836140 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.836160 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.836173 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:44Z","lastTransitionTime":"2026-01-20T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.939011 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.939062 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.939079 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.939100 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:44 crc kubenswrapper[4661]: I0120 18:06:44.939114 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:44Z","lastTransitionTime":"2026-01-20T18:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.042048 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.042104 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.042114 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.042133 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.042145 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:45Z","lastTransitionTime":"2026-01-20T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.121649 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:40:34.668393815 +0000 UTC Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.142063 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.142121 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.142159 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:45 crc kubenswrapper[4661]: E0120 18:06:45.142264 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:45 crc kubenswrapper[4661]: E0120 18:06:45.142361 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.142433 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:45 crc kubenswrapper[4661]: E0120 18:06:45.142914 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:45 crc kubenswrapper[4661]: E0120 18:06:45.142994 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.145083 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.145241 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.145254 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.145274 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.145291 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:45Z","lastTransitionTime":"2026-01-20T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.248174 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.248208 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.248226 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.248245 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.248257 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:45Z","lastTransitionTime":"2026-01-20T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.349845 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.349879 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.349889 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.349902 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.349911 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:45Z","lastTransitionTime":"2026-01-20T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.453977 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.454052 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.454070 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.454099 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.454118 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:45Z","lastTransitionTime":"2026-01-20T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.557817 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.557904 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.557930 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.557974 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.557997 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:45Z","lastTransitionTime":"2026-01-20T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.661057 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.661110 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.661120 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.661139 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.661151 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:45Z","lastTransitionTime":"2026-01-20T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.768178 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.768277 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.768290 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.768312 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.768327 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:45Z","lastTransitionTime":"2026-01-20T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.877872 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.877921 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.877931 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.877948 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.877998 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:45Z","lastTransitionTime":"2026-01-20T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.981483 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.981523 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.981531 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.981546 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:45 crc kubenswrapper[4661]: I0120 18:06:45.981555 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:45Z","lastTransitionTime":"2026-01-20T18:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.083717 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.083747 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.083755 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.083769 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.083778 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:46Z","lastTransitionTime":"2026-01-20T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.122745 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:24:58.403460475 +0000 UTC Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.187517 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.187563 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.187579 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.187602 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.187618 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:46Z","lastTransitionTime":"2026-01-20T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.290687 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.290726 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.290735 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.290749 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.290759 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:46Z","lastTransitionTime":"2026-01-20T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.393681 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.393714 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.393731 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.393746 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.393757 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:46Z","lastTransitionTime":"2026-01-20T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.496531 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.496626 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.496658 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.496735 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.496799 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:46Z","lastTransitionTime":"2026-01-20T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.599964 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.600023 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.600043 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.600071 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.600089 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:46Z","lastTransitionTime":"2026-01-20T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.704403 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.704456 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.704473 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.704498 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.704516 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:46Z","lastTransitionTime":"2026-01-20T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.807812 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.807852 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.807862 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.807879 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.807888 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:46Z","lastTransitionTime":"2026-01-20T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.911512 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.911561 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.911576 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.911623 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:46 crc kubenswrapper[4661]: I0120 18:06:46.911639 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:46Z","lastTransitionTime":"2026-01-20T18:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.014986 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.015050 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.015063 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.015085 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.015101 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:47Z","lastTransitionTime":"2026-01-20T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.117898 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.118290 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.118412 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.118510 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.118632 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:47Z","lastTransitionTime":"2026-01-20T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.123208 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:41:39.88993895 +0000 UTC Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.141681 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:47 crc kubenswrapper[4661]: E0120 18:06:47.142017 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.141749 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:47 crc kubenswrapper[4661]: E0120 18:06:47.142276 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.141702 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:47 crc kubenswrapper[4661]: E0120 18:06:47.142529 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.141790 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:47 crc kubenswrapper[4661]: E0120 18:06:47.142898 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.222324 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.222377 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.222391 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.222411 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.222425 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:47Z","lastTransitionTime":"2026-01-20T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.325381 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.325445 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.325464 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.325489 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.325506 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:47Z","lastTransitionTime":"2026-01-20T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.428161 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.428206 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.428222 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.428238 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.428251 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:47Z","lastTransitionTime":"2026-01-20T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.531196 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.531253 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.531267 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.531290 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.531305 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:47Z","lastTransitionTime":"2026-01-20T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.634081 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.634133 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.634144 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.634164 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.634183 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:47Z","lastTransitionTime":"2026-01-20T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.737469 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.737511 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.737520 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.737535 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.737545 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:47Z","lastTransitionTime":"2026-01-20T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.841259 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.841334 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.841353 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.841384 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.841402 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:47Z","lastTransitionTime":"2026-01-20T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.943853 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.943904 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.943915 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.943936 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:47 crc kubenswrapper[4661]: I0120 18:06:47.943946 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:47Z","lastTransitionTime":"2026-01-20T18:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.047875 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.047939 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.047948 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.047966 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.047983 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.073996 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.074071 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.074090 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.074122 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.074154 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: E0120 18:06:48.094620 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:48Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.100827 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.100880 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.100895 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.100919 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.100932 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: E0120 18:06:48.118297 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:48Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.123102 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.123153 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.123163 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.123181 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.123193 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.123338 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:35:07.971682759 +0000 UTC Jan 20 18:06:48 crc kubenswrapper[4661]: E0120 18:06:48.138777 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:48Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.143247 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.143289 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.143304 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.143322 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.143336 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: E0120 18:06:48.157738 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:48Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.162721 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.162770 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.162782 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.162804 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.162819 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: E0120 18:06:48.178838 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:48Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:48 crc kubenswrapper[4661]: E0120 18:06:48.179011 4661 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.185923 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.185967 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.185987 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.186014 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.186033 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.289744 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.289778 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.289787 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.289802 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.289811 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.391921 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.391975 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.391985 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.392011 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.392023 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.495128 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.495170 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.495178 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.495195 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.495207 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.598329 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.598381 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.598392 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.598410 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.598425 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.715840 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.715871 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.715884 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.715898 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.715909 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.818010 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.818061 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.818072 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.818091 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.818106 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.921285 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.921318 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.921328 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.921342 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:48 crc kubenswrapper[4661]: I0120 18:06:48.921354 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:48Z","lastTransitionTime":"2026-01-20T18:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.024021 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.024066 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.024077 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.024095 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.024105 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:49Z","lastTransitionTime":"2026-01-20T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.123940 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:28:26.344035864 +0000 UTC Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.127215 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.127260 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.127273 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.127297 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.127313 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:49Z","lastTransitionTime":"2026-01-20T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.141878 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.141928 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.141957 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.141936 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:49 crc kubenswrapper[4661]: E0120 18:06:49.142031 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:49 crc kubenswrapper[4661]: E0120 18:06:49.142128 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:49 crc kubenswrapper[4661]: E0120 18:06:49.142168 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:49 crc kubenswrapper[4661]: E0120 18:06:49.142225 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.229685 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.229730 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.229741 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.229763 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.229777 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:49Z","lastTransitionTime":"2026-01-20T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.332169 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.332227 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.332244 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.332264 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.332277 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:49Z","lastTransitionTime":"2026-01-20T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.435501 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.435584 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.435601 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.435627 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.435646 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:49Z","lastTransitionTime":"2026-01-20T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.538501 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.538584 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.538594 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.538614 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.538626 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:49Z","lastTransitionTime":"2026-01-20T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.640841 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.640884 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.640894 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.640910 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.640919 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:49Z","lastTransitionTime":"2026-01-20T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.743833 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.743899 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.743910 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.743929 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.743944 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:49Z","lastTransitionTime":"2026-01-20T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.847046 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.847103 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.847118 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.847138 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.847151 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:49Z","lastTransitionTime":"2026-01-20T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.950369 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.950418 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.950428 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.950445 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:49 crc kubenswrapper[4661]: I0120 18:06:49.950457 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:49Z","lastTransitionTime":"2026-01-20T18:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.053152 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.053200 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.053211 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.053228 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.053242 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:50Z","lastTransitionTime":"2026-01-20T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.124722 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:45:12.610727673 +0000 UTC Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.141791 4661 scope.go:117] "RemoveContainer" containerID="823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105" Jan 20 18:06:50 crc kubenswrapper[4661]: E0120 18:06:50.142008 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.155307 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.155350 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.155363 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.155384 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.155397 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:50Z","lastTransitionTime":"2026-01-20T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.257582 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.257629 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.257646 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.257683 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.257701 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:50Z","lastTransitionTime":"2026-01-20T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.359732 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.359765 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.359774 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.359789 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.359799 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:50Z","lastTransitionTime":"2026-01-20T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.462460 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.462500 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.462512 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.462530 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.462541 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:50Z","lastTransitionTime":"2026-01-20T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.564825 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.564869 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.564881 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.564901 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.564913 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:50Z","lastTransitionTime":"2026-01-20T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.667118 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.667157 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.667167 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.667194 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.667204 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:50Z","lastTransitionTime":"2026-01-20T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.770218 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.770264 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.770276 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.770294 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.770304 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:50Z","lastTransitionTime":"2026-01-20T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.873184 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.873223 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.873232 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.873354 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.873371 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:50Z","lastTransitionTime":"2026-01-20T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.976280 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.976329 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.976343 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.976363 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:50 crc kubenswrapper[4661]: I0120 18:06:50.976377 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:50Z","lastTransitionTime":"2026-01-20T18:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.078729 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.078771 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.078783 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.078806 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.078821 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:51Z","lastTransitionTime":"2026-01-20T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.125872 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:15:46.016366278 +0000 UTC Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.143074 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.143081 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.143191 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.143408 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:51 crc kubenswrapper[4661]: E0120 18:06:51.143495 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:51 crc kubenswrapper[4661]: E0120 18:06:51.143700 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:51 crc kubenswrapper[4661]: E0120 18:06:51.143750 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:51 crc kubenswrapper[4661]: E0120 18:06:51.143831 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.181268 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.181377 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.181457 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.181542 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.181606 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:51Z","lastTransitionTime":"2026-01-20T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.284845 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.284907 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.284920 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.284941 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.285140 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:51Z","lastTransitionTime":"2026-01-20T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.387995 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.388507 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.388586 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.388702 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.388812 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:51Z","lastTransitionTime":"2026-01-20T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.492210 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.492256 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.492270 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.492289 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.492313 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:51Z","lastTransitionTime":"2026-01-20T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.595732 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.595807 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.595829 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.595857 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.595876 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:51Z","lastTransitionTime":"2026-01-20T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.698570 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.698619 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.698631 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.698653 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.698703 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:51Z","lastTransitionTime":"2026-01-20T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.802298 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.802346 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.802359 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.802381 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.802396 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:51Z","lastTransitionTime":"2026-01-20T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.879349 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:51 crc kubenswrapper[4661]: E0120 18:06:51.879592 4661 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:51 crc kubenswrapper[4661]: E0120 18:06:51.879939 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs podName:58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131 nodeName:}" failed. No retries permitted until 2026-01-20 18:07:23.879918512 +0000 UTC m=+100.210708174 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs") pod "network-metrics-daemon-dhd6h" (UID: "58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.905931 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.905979 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.905993 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.906011 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:51 crc kubenswrapper[4661]: I0120 18:06:51.906023 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:51Z","lastTransitionTime":"2026-01-20T18:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.008847 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.008910 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.008928 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.008960 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.008977 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:52Z","lastTransitionTime":"2026-01-20T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.112242 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.112832 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.112850 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.112873 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.112887 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:52Z","lastTransitionTime":"2026-01-20T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.126695 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:55:42.803327848 +0000 UTC Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.215463 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.215519 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.215528 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.215545 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.215556 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:52Z","lastTransitionTime":"2026-01-20T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.318568 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.318977 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.319156 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.319248 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.319319 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:52Z","lastTransitionTime":"2026-01-20T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.421867 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.422189 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.422253 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.422320 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.422430 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:52Z","lastTransitionTime":"2026-01-20T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.525833 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.525916 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.525934 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.525964 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.525982 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:52Z","lastTransitionTime":"2026-01-20T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.629019 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.629365 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.629439 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.629522 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.629585 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:52Z","lastTransitionTime":"2026-01-20T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.733897 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.733953 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.733965 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.733988 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.734001 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:52Z","lastTransitionTime":"2026-01-20T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.836460 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.836511 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.836524 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.836548 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.836562 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:52Z","lastTransitionTime":"2026-01-20T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.939955 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.940009 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.940020 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.940038 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:52 crc kubenswrapper[4661]: I0120 18:06:52.940050 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:52Z","lastTransitionTime":"2026-01-20T18:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.042840 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.042902 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.042918 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.042942 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.043000 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:53Z","lastTransitionTime":"2026-01-20T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.127102 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:19:50.80753738 +0000 UTC Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.141053 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.141187 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:53 crc kubenswrapper[4661]: E0120 18:06:53.141309 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.141329 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.141367 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:53 crc kubenswrapper[4661]: E0120 18:06:53.141516 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:53 crc kubenswrapper[4661]: E0120 18:06:53.141556 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:53 crc kubenswrapper[4661]: E0120 18:06:53.141633 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.147193 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.147251 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.147270 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.147300 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.147325 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:53Z","lastTransitionTime":"2026-01-20T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.250858 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.250893 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.250904 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.250927 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.250939 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:53Z","lastTransitionTime":"2026-01-20T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.353314 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.353355 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.353370 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.353388 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.353399 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:53Z","lastTransitionTime":"2026-01-20T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.456213 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.456262 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.456273 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.456291 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.456304 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:53Z","lastTransitionTime":"2026-01-20T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.559298 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.559377 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.559390 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.559414 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.559429 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:53Z","lastTransitionTime":"2026-01-20T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.662748 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.662786 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.662796 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.662812 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.662821 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:53Z","lastTransitionTime":"2026-01-20T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.765574 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.765615 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.765628 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.765651 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.765664 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:53Z","lastTransitionTime":"2026-01-20T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.827143 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/0.log" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.827252 4661 generic.go:334] "Generic (PLEG): container finished" podID="5b6f2401-3eb9-4ee4-b79c-6faee06bc21c" containerID="5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d" exitCode=1 Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.827314 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-z97p2" event={"ID":"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c","Type":"ContainerDied","Data":"5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d"} Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.828152 4661 scope.go:117] "RemoveContainer" containerID="5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.843751 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e82a8ce9-e23c-4fbc-9d26-0e81374193ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.860015 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.867829 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.867873 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.867886 4661 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.867907 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.867920 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:53Z","lastTransitionTime":"2026-01-20T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.877846 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.899252 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.915332 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.929244 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.940066 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.968142 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:35Z\\\",\\\"message\\\":\\\".BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:35.219065 6217 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:35.219313 6217 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.219547 6217 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220089 6217 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220357 6217 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220410 6217 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220459 6217 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.221651 6217 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 18:06:35.221712 6217 factory.go:656] Stopping watch factory\\\\nI0120 18:06:35.221730 6217 ovnkube.go:599] Stopped ovnkube\\\\nI0120 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.971881 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.971915 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.971929 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.971950 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.971961 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:53Z","lastTransitionTime":"2026-01-20T18:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.982156 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:53 crc kubenswrapper[4661]: I0120 18:06:53.996321 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:53Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.008385 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.022164 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:53Z\\\",\\\"message\\\":\\\"2026-01-20T18:06:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa\\\\n2026-01-20T18:06:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa to /host/opt/cni/bin/\\\\n2026-01-20T18:06:08Z [verbose] multus-daemon started\\\\n2026-01-20T18:06:08Z [verbose] Readiness Indicator file check\\\\n2026-01-20T18:06:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.036308 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 
18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.050854 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.065322 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.075562 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.075636 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.075656 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.075711 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.075733 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:54Z","lastTransitionTime":"2026-01-20T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.078656 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.090060 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.127711 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:12:41.8427271 +0000 UTC Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.159390 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e82a8ce9-e23c-4fbc-9d26-0e81374193ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.174525 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.178766 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.178871 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.178899 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.178932 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.178958 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:54Z","lastTransitionTime":"2026-01-20T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.193590 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.212369 4661 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.227847 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.244233 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.260742 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.282119 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.282179 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.282194 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.282217 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.282229 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:54Z","lastTransitionTime":"2026-01-20T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.283890 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555a
f9dd66346f33065839076105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:35Z\\\",\\\"message\\\":\\\".BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:35.219065 6217 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:35.219313 6217 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.219547 6217 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220089 6217 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220357 6217 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220410 6217 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220459 6217 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.221651 6217 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 18:06:35.221712 6217 factory.go:656] Stopping watch factory\\\\nI0120 18:06:35.221730 6217 ovnkube.go:599] Stopped ovnkube\\\\nI0120 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.299111 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.313841 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.326215 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.341219 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:53Z\\\",\\\"message\\\":\\\"2026-01-20T18:06:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa\\\\n2026-01-20T18:06:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa to /host/opt/cni/bin/\\\\n2026-01-20T18:06:08Z [verbose] multus-daemon started\\\\n2026-01-20T18:06:08Z [verbose] Readiness Indicator file check\\\\n2026-01-20T18:06:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.356562 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 
18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.373149 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.386963 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.387014 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.387028 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.387047 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.387058 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:54Z","lastTransitionTime":"2026-01-20T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.388889 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.402558 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.412830 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.490266 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.490318 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.490329 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.490350 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.490365 4661 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:54Z","lastTransitionTime":"2026-01-20T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.592443 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.592499 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.592509 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.592526 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.592538 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:54Z","lastTransitionTime":"2026-01-20T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.696062 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.696125 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.696139 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.696163 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.696180 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:54Z","lastTransitionTime":"2026-01-20T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.798987 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.799044 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.799055 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.799079 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.799095 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:54Z","lastTransitionTime":"2026-01-20T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.833298 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/0.log" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.833660 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-z97p2" event={"ID":"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c","Type":"ContainerStarted","Data":"ab1840afe6e204ba16157cfa4140926ab50dd66d6b72a0e49e4ef986f62c7e34"} Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.851962 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.868135 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.880565 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-da
emon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.890850 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.901062 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.901095 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.901104 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.901118 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.901128 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:54Z","lastTransitionTime":"2026-01-20T18:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.902281 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e82a8ce9-e23c-4fbc-9d26-0e81374193ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd7
89a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.916052 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.929726 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.944612 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.961220 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.974777 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:54 crc kubenswrapper[4661]: I0120 18:06:54.987762 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:54Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.003545 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.003592 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.003607 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.003627 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.003643 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:55Z","lastTransitionTime":"2026-01-20T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.009819 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555a
f9dd66346f33065839076105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:35Z\\\",\\\"message\\\":\\\".BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:35.219065 6217 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:35.219313 6217 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.219547 6217 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220089 6217 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220357 6217 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220410 6217 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220459 6217 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.221651 6217 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 18:06:35.221712 6217 factory.go:656] Stopping watch factory\\\\nI0120 18:06:35.221730 6217 ovnkube.go:599] Stopped ovnkube\\\\nI0120 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.024240 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.039079 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.053056 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-20T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.068848 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1840afe6e204ba16157cfa4140926ab50dd66d6b72a0e49e4ef986f62c7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:53Z\\\",\\\"message\\\":\\\"2026-01-20T18:06:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa\\\\n2026-01-20T18:06:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa to /host/opt/cni/bin/\\\\n2026-01-20T18:06:08Z [verbose] multus-daemon started\\\\n2026-01-20T18:06:08Z [verbose] Readiness Indicator file check\\\\n2026-01-20T18:06:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.083586 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:55Z is after 2025-08-24T17:21:41Z" Jan 20 
18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.106395 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.106452 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.106463 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.106486 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.106502 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:55Z","lastTransitionTime":"2026-01-20T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.128929 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:02:37.059630761 +0000 UTC Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.141268 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.141329 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.141329 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:55 crc kubenswrapper[4661]: E0120 18:06:55.141434 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:55 crc kubenswrapper[4661]: E0120 18:06:55.141497 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:55 crc kubenswrapper[4661]: E0120 18:06:55.141531 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.141720 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:55 crc kubenswrapper[4661]: E0120 18:06:55.141933 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.210798 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.210855 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.210868 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.210893 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.210909 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:55Z","lastTransitionTime":"2026-01-20T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.313695 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.313760 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.313778 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.313799 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.313812 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:55Z","lastTransitionTime":"2026-01-20T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.417048 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.417400 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.417499 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.417587 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.417653 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:55Z","lastTransitionTime":"2026-01-20T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.520339 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.520402 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.520416 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.520439 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.520457 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:55Z","lastTransitionTime":"2026-01-20T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.623355 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.623778 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.623889 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.623987 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.624099 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:55Z","lastTransitionTime":"2026-01-20T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.726979 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.727398 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.727565 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.727734 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.727887 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:55Z","lastTransitionTime":"2026-01-20T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.831887 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.831947 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.831957 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.831979 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.831994 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:55Z","lastTransitionTime":"2026-01-20T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.934379 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.934418 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.934427 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.934444 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:55 crc kubenswrapper[4661]: I0120 18:06:55.934456 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:55Z","lastTransitionTime":"2026-01-20T18:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.036909 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.036969 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.036981 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.037006 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.037020 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:56Z","lastTransitionTime":"2026-01-20T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.129615 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:51:59.858800552 +0000 UTC Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.140037 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.140079 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.140090 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.140109 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.140120 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:56Z","lastTransitionTime":"2026-01-20T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.243161 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.243197 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.243206 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.243226 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.243237 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:56Z","lastTransitionTime":"2026-01-20T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.346434 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.346547 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.346576 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.346608 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.346632 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:56Z","lastTransitionTime":"2026-01-20T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.450369 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.450453 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.450476 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.450511 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.450533 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:56Z","lastTransitionTime":"2026-01-20T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.554341 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.554431 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.554450 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.554482 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.554505 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:56Z","lastTransitionTime":"2026-01-20T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.658392 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.658445 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.658463 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.658491 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.658511 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:56Z","lastTransitionTime":"2026-01-20T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.761729 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.761805 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.761831 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.761865 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.761890 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:56Z","lastTransitionTime":"2026-01-20T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.865216 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.865286 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.865304 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.865333 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.865353 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:56Z","lastTransitionTime":"2026-01-20T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.967440 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.967488 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.967501 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.967522 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:56 crc kubenswrapper[4661]: I0120 18:06:56.967534 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:56Z","lastTransitionTime":"2026-01-20T18:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.070493 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.070544 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.070557 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.070576 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.070588 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:57Z","lastTransitionTime":"2026-01-20T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.132116 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 00:06:47.942451185 +0000 UTC Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.141559 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.141595 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.141646 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.141645 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:57 crc kubenswrapper[4661]: E0120 18:06:57.141775 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:57 crc kubenswrapper[4661]: E0120 18:06:57.141864 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:57 crc kubenswrapper[4661]: E0120 18:06:57.142049 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:57 crc kubenswrapper[4661]: E0120 18:06:57.142262 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.173081 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.173111 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.173122 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.173140 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.173153 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:57Z","lastTransitionTime":"2026-01-20T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.276575 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.276657 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.276713 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.276749 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.276771 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:57Z","lastTransitionTime":"2026-01-20T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.380506 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.380944 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.381153 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.381296 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.381437 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:57Z","lastTransitionTime":"2026-01-20T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.484974 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.485039 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.485059 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.485090 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.485110 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:57Z","lastTransitionTime":"2026-01-20T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.587944 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.588003 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.588019 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.588045 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.588061 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:57Z","lastTransitionTime":"2026-01-20T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.691614 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.691681 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.691694 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.691716 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.691730 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:57Z","lastTransitionTime":"2026-01-20T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.795591 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.796306 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.796457 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.796642 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.796879 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:57Z","lastTransitionTime":"2026-01-20T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.900265 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.900809 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.901020 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.901234 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:57 crc kubenswrapper[4661]: I0120 18:06:57.901607 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:57Z","lastTransitionTime":"2026-01-20T18:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.004982 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.005018 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.005026 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.005043 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.005052 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.108693 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.109166 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.109187 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.109222 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.109246 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.133118 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:34:10.80406551 +0000 UTC Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.195220 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.195287 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.195305 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.195338 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.195358 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: E0120 18:06:58.215799 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:58Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.221163 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.221231 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.221252 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.221283 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.221303 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: E0120 18:06:58.245703 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:58Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.250106 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.250177 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.250815 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.250900 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.251190 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: E0120 18:06:58.273571 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:58Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.280453 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.280519 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.280534 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.280588 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.280609 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: E0120 18:06:58.303508 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:58Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.307651 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.307716 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.307733 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.307754 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.307770 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: E0120 18:06:58.322473 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:06:58Z is after 2025-08-24T17:21:41Z" Jan 20 18:06:58 crc kubenswrapper[4661]: E0120 18:06:58.322744 4661 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.325189 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.325240 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.325253 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.325273 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.325288 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.429215 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.429294 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.429318 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.429350 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.429372 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.533458 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.533543 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.533568 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.533600 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.533624 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.637848 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.637895 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.637912 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.637938 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.637955 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.741436 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.741497 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.741513 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.741539 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.741557 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.845089 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.845163 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.845183 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.845214 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.845232 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.956162 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.956247 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.956271 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.956304 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:58 crc kubenswrapper[4661]: I0120 18:06:58.956331 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:58Z","lastTransitionTime":"2026-01-20T18:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.060428 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.060489 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.060506 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.060533 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.060553 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:59Z","lastTransitionTime":"2026-01-20T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.133857 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:48:59.520737704 +0000 UTC Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.141158 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.141229 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.141164 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.141158 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:06:59 crc kubenswrapper[4661]: E0120 18:06:59.141384 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:06:59 crc kubenswrapper[4661]: E0120 18:06:59.141549 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:06:59 crc kubenswrapper[4661]: E0120 18:06:59.141781 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:06:59 crc kubenswrapper[4661]: E0120 18:06:59.142156 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.163765 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.163833 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.163856 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.163883 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.163903 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:59Z","lastTransitionTime":"2026-01-20T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.267434 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.267494 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.267511 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.267537 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.267556 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:59Z","lastTransitionTime":"2026-01-20T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.370530 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.370652 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.370710 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.370735 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.370751 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:59Z","lastTransitionTime":"2026-01-20T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.473900 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.473981 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.474034 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.474059 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.474078 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:59Z","lastTransitionTime":"2026-01-20T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.577406 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.577464 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.577479 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.577501 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.577517 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:59Z","lastTransitionTime":"2026-01-20T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.680839 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.680894 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.680906 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.680927 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.680941 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:59Z","lastTransitionTime":"2026-01-20T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.784468 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.784517 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.784528 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.784549 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.784567 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:59Z","lastTransitionTime":"2026-01-20T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.887564 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.887629 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.887653 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.887690 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.887752 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:59Z","lastTransitionTime":"2026-01-20T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.991917 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.992086 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.992292 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.992333 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:06:59 crc kubenswrapper[4661]: I0120 18:06:59.992362 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:06:59Z","lastTransitionTime":"2026-01-20T18:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.095821 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.095911 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.095949 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.095987 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.096015 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:00Z","lastTransitionTime":"2026-01-20T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.135167 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:33:42.134082318 +0000 UTC Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.199945 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.200026 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.200046 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.200079 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.200100 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:00Z","lastTransitionTime":"2026-01-20T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.303485 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.303572 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.303593 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.303625 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.303645 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:00Z","lastTransitionTime":"2026-01-20T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.407177 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.407239 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.407256 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.407283 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.407304 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:00Z","lastTransitionTime":"2026-01-20T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.511404 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.511483 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.511513 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.511551 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.511577 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:00Z","lastTransitionTime":"2026-01-20T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.614588 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.614636 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.614655 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.614686 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.614747 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:00Z","lastTransitionTime":"2026-01-20T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.717394 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.717462 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.717488 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.717521 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.717544 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:00Z","lastTransitionTime":"2026-01-20T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.820613 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.820655 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.820679 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.820809 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.820839 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:00Z","lastTransitionTime":"2026-01-20T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.924111 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.924185 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.924204 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.924233 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:00 crc kubenswrapper[4661]: I0120 18:07:00.924254 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:00Z","lastTransitionTime":"2026-01-20T18:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.027773 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.027850 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.027868 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.027894 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.027912 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:01Z","lastTransitionTime":"2026-01-20T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.130839 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.130908 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.130931 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.130964 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.130986 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:01Z","lastTransitionTime":"2026-01-20T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.136093 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:24:22.101058307 +0000 UTC Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.141767 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.141832 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.141845 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:01 crc kubenswrapper[4661]: E0120 18:07:01.142009 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:01 crc kubenswrapper[4661]: E0120 18:07:01.142172 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:01 crc kubenswrapper[4661]: E0120 18:07:01.142363 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.142650 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:01 crc kubenswrapper[4661]: E0120 18:07:01.142970 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.234427 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.234489 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.234501 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.234532 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.234548 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:01Z","lastTransitionTime":"2026-01-20T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.338440 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.338495 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.338510 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.338536 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.338552 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:01Z","lastTransitionTime":"2026-01-20T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.442039 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.442509 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.442750 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.443019 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.443241 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:01Z","lastTransitionTime":"2026-01-20T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.546743 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.546820 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.546833 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.546860 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.546884 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:01Z","lastTransitionTime":"2026-01-20T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.650253 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.650326 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.650345 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.650375 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.650395 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:01Z","lastTransitionTime":"2026-01-20T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.754065 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.754129 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.754141 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.754162 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.754175 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:01Z","lastTransitionTime":"2026-01-20T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.858168 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.858260 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.858279 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.858303 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.858352 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:01Z","lastTransitionTime":"2026-01-20T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.962105 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.962168 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.962187 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.962213 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:01 crc kubenswrapper[4661]: I0120 18:07:01.962231 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:01Z","lastTransitionTime":"2026-01-20T18:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.065442 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.065531 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.065553 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.065584 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.065603 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:02Z","lastTransitionTime":"2026-01-20T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.136514 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:38:29.780051077 +0000 UTC Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.169013 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.169075 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.169098 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.169127 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.169150 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:02Z","lastTransitionTime":"2026-01-20T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.273650 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.273750 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.273767 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.273795 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.273816 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:02Z","lastTransitionTime":"2026-01-20T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.377917 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.378000 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.378029 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.378066 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.378090 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:02Z","lastTransitionTime":"2026-01-20T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.481743 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.481810 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.481833 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.481924 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.482000 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:02Z","lastTransitionTime":"2026-01-20T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.585522 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.585586 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.585604 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.585631 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.585650 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:02Z","lastTransitionTime":"2026-01-20T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.689386 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.689437 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.689452 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.689478 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.689494 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:02Z","lastTransitionTime":"2026-01-20T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.792861 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.792921 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.792939 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.792965 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.792983 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:02Z","lastTransitionTime":"2026-01-20T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.896307 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.896364 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.896384 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.896411 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:02 crc kubenswrapper[4661]: I0120 18:07:02.896430 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:02Z","lastTransitionTime":"2026-01-20T18:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.000013 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.000089 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.000114 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.000150 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.000177 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:03Z","lastTransitionTime":"2026-01-20T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.103772 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.104207 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.104372 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.104528 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.104688 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:03Z","lastTransitionTime":"2026-01-20T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.137483 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 15:45:28.382459434 +0000 UTC Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.142223 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.142322 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.142215 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:03 crc kubenswrapper[4661]: E0120 18:07:03.142577 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.142306 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:03 crc kubenswrapper[4661]: E0120 18:07:03.142798 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:03 crc kubenswrapper[4661]: E0120 18:07:03.142444 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:03 crc kubenswrapper[4661]: E0120 18:07:03.142900 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.208234 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.208288 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.208302 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.208321 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.208335 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:03Z","lastTransitionTime":"2026-01-20T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.313195 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.313275 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.313293 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.313321 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.313341 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:03Z","lastTransitionTime":"2026-01-20T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.416848 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.416893 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.416905 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.416924 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.416938 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:03Z","lastTransitionTime":"2026-01-20T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.520306 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.520382 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.520394 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.520415 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.520430 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:03Z","lastTransitionTime":"2026-01-20T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.624236 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.624295 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.624313 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.624338 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.624355 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:03Z","lastTransitionTime":"2026-01-20T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.728361 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.728423 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.728440 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.728464 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.728481 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:03Z","lastTransitionTime":"2026-01-20T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.832133 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.832192 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.832207 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.832234 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.832256 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:03Z","lastTransitionTime":"2026-01-20T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.935725 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.935811 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.935838 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.935872 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:03 crc kubenswrapper[4661]: I0120 18:07:03.935896 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:03Z","lastTransitionTime":"2026-01-20T18:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.039302 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.039363 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.039380 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.039406 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.039422 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:04Z","lastTransitionTime":"2026-01-20T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.137737 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:40:07.809082322 +0000 UTC Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.142587 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.142646 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.142671 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.142730 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.142756 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:04Z","lastTransitionTime":"2026-01-20T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.158382 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.181502 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.202414 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.225658 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.247510 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.249631 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.249748 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.249775 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.249817 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.249838 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:04Z","lastTransitionTime":"2026-01-20T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.280847 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555a
f9dd66346f33065839076105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:35Z\\\",\\\"message\\\":\\\".BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:35.219065 6217 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:35.219313 6217 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.219547 6217 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220089 6217 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220357 6217 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220410 6217 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220459 6217 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.221651 6217 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 18:06:35.221712 6217 factory.go:656] Stopping watch factory\\\\nI0120 18:06:35.221730 6217 ovnkube.go:599] Stopped ovnkube\\\\nI0120 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.300043 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.317655 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.335001 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1840afe6e204ba16157cfa4140926ab50dd66d6b72a0e49e4ef986f62c7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:53Z\\\",\\\"message\\\":\\\"2026-01-20T18:06:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa\\\\n2026-01-20T18:06:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa to /host/opt/cni/bin/\\\\n2026-01-20T18:06:08Z [verbose] multus-daemon started\\\\n2026-01-20T18:06:08Z [verbose] Readiness Indicator file check\\\\n2026-01-20T18:06:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.352000 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 
18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.355884 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.355933 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.355976 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.356001 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.356016 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:04Z","lastTransitionTime":"2026-01-20T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.369644 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 
18:07:04.387312 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.403749 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.415885 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.432417 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e82a8ce9-e23c-4fbc-9d26-0e81374193ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.447201 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.458367 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.458397 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.458404 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.458439 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.458449 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:04Z","lastTransitionTime":"2026-01-20T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.468042 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:04Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.562651 4661 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.563412 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.563848 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.564150 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.564456 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:04Z","lastTransitionTime":"2026-01-20T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.669286 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.669758 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.669911 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.670066 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.670179 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:04Z","lastTransitionTime":"2026-01-20T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.774220 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.774819 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.775107 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.775345 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.775575 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:04Z","lastTransitionTime":"2026-01-20T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.881299 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.881862 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.882100 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.882303 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.882603 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:04Z","lastTransitionTime":"2026-01-20T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.987036 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.987118 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.987141 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.987212 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:04 crc kubenswrapper[4661]: I0120 18:07:04.987232 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:04Z","lastTransitionTime":"2026-01-20T18:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.089782 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.089828 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.089844 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.089887 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.089899 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:05Z","lastTransitionTime":"2026-01-20T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.138661 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 00:05:21.300035296 +0000 UTC Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.142138 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:05 crc kubenswrapper[4661]: E0120 18:07:05.142359 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.142375 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.142429 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.142404 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:05 crc kubenswrapper[4661]: E0120 18:07:05.143399 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:05 crc kubenswrapper[4661]: E0120 18:07:05.143478 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:05 crc kubenswrapper[4661]: E0120 18:07:05.143629 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.144067 4661 scope.go:117] "RemoveContainer" containerID="823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.228404 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.228474 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.228497 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.228529 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.228552 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:05Z","lastTransitionTime":"2026-01-20T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.332056 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.332146 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.332213 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.332247 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.332264 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:05Z","lastTransitionTime":"2026-01-20T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.435482 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.435556 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.435576 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.435604 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.435625 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:05Z","lastTransitionTime":"2026-01-20T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.539183 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.539266 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.539287 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.539329 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.539350 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:05Z","lastTransitionTime":"2026-01-20T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.642848 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.642923 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.642941 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.642980 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.643002 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:05Z","lastTransitionTime":"2026-01-20T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.746222 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.746305 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.746332 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.746368 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.746395 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:05Z","lastTransitionTime":"2026-01-20T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.850400 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.850462 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.850481 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.850509 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.850528 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:05Z","lastTransitionTime":"2026-01-20T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.954105 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.954181 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.954205 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.954236 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:05 crc kubenswrapper[4661]: I0120 18:07:05.954261 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:05Z","lastTransitionTime":"2026-01-20T18:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.058059 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.058160 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.058179 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.058208 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.058229 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:06Z","lastTransitionTime":"2026-01-20T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.139450 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:32:40.16353851 +0000 UTC Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.161157 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.161223 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.161241 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.161266 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.161289 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:06Z","lastTransitionTime":"2026-01-20T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.266627 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.266748 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.266810 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.266848 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.266873 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:06Z","lastTransitionTime":"2026-01-20T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.370768 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.370808 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.370818 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.370835 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.370846 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:06Z","lastTransitionTime":"2026-01-20T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.473929 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.474004 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.474022 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.474051 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.474069 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:06Z","lastTransitionTime":"2026-01-20T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.582279 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.582349 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.582374 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.582404 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.582429 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:06Z","lastTransitionTime":"2026-01-20T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.685934 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.686004 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.686022 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.686052 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.686072 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:06Z","lastTransitionTime":"2026-01-20T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.789147 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.789213 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.789224 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.789244 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.789257 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:06Z","lastTransitionTime":"2026-01-20T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.884864 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/2.log" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.893219 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.893356 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.893381 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.893462 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.893494 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:06Z","lastTransitionTime":"2026-01-20T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.894633 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerStarted","Data":"a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7"} Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.895250 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.920584 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.948736 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1840afe6e204ba16157cfa4140926ab50dd66d6b72a0e49e4ef986f62c7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:53Z\\\",\\\"message\\\":\\\"2026-01-20T18:06:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa\\\\n2026-01-20T18:06:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa to /host/opt/cni/bin/\\\\n2026-01-20T18:06:08Z [verbose] multus-daemon started\\\\n2026-01-20T18:06:08Z [verbose] Readiness Indicator file check\\\\n2026-01-20T18:06:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.967629 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 20 
18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.977306 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:07:06 crc kubenswrapper[4661]: E0120 18:07:06.977578 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:10.977528878 +0000 UTC m=+147.308318580 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.977698 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:06 crc kubenswrapper[4661]: E0120 18:07:06.977848 4661 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.977849 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:06 crc kubenswrapper[4661]: E0120 18:07:06.977951 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:08:10.977922949 +0000 UTC m=+147.308712641 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 18:07:06 crc kubenswrapper[4661]: E0120 18:07:06.978013 4661 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.978138 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.978245 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:06 crc kubenswrapper[4661]: E0120 18:07:06.978255 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:07:06 crc kubenswrapper[4661]: E0120 18:07:06.978434 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:07:06 crc kubenswrapper[4661]: E0120 18:07:06.978464 4661 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:07:06 crc kubenswrapper[4661]: E0120 18:07:06.978363 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 18:08:10.9783139 +0000 UTC m=+147.309103602 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 18:07:06 crc kubenswrapper[4661]: E0120 18:07:06.978625 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 18:07:06 crc kubenswrapper[4661]: E0120 18:07:06.978786 4661 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 18:07:06 crc kubenswrapper[4661]: E0120 18:07:06.978822 4661 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:07:06 crc kubenswrapper[4661]: E0120 18:07:06.978637 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 18:08:10.978604749 +0000 UTC m=+147.309394531 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:07:06 crc kubenswrapper[4661]: E0120 18:07:06.978942 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 18:08:10.978905748 +0000 UTC m=+147.309695620 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.990220 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d1
7ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:06Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.996935 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.996996 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.997008 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.997029 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:06 crc kubenswrapper[4661]: I0120 18:07:06.997042 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:06Z","lastTransitionTime":"2026-01-20T18:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.008986 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.024228 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.048082 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.075525 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.100574 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.100621 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.100630 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.100648 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.100658 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:07Z","lastTransitionTime":"2026-01-20T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.110514 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.130757 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.140395 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 19:06:31.098126953 +0000 UTC Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.141639 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.141768 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.141774 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.141804 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:07 crc kubenswrapper[4661]: E0120 18:07:07.141987 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:07 crc kubenswrapper[4661]: E0120 18:07:07.142102 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:07 crc kubenswrapper[4661]: E0120 18:07:07.142226 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:07 crc kubenswrapper[4661]: E0120 18:07:07.142382 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.149270 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e82a8ce9-e23c-4fbc-9d26-0e81374193ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.163796 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.177799 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.203406 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.203660 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.203795 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.203899 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.203988 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:07Z","lastTransitionTime":"2026-01-20T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.205275 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a87060cdc681c7299812827e762152da6ae48e58
62cda4b15a238c2ac16c60e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:35Z\\\",\\\"message\\\":\\\".BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:35.219065 6217 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:35.219313 6217 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.219547 6217 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220089 6217 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220357 6217 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220410 6217 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220459 6217 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.221651 6217 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 18:06:35.221712 6217 factory.go:656] Stopping watch factory\\\\nI0120 18:06:35.221730 6217 ovnkube.go:599] Stopped ovnkube\\\\nI0120 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.218240 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.240700 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.256958 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.306805 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.307261 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.307371 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.307472 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.307556 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:07Z","lastTransitionTime":"2026-01-20T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.410441 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.410491 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.410501 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.410520 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.410534 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:07Z","lastTransitionTime":"2026-01-20T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.513938 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.513985 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.513998 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.514017 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.514028 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:07Z","lastTransitionTime":"2026-01-20T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.616744 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.616815 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.616833 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.616862 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.616880 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:07Z","lastTransitionTime":"2026-01-20T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.720753 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.721195 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.721500 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.721740 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.721929 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:07Z","lastTransitionTime":"2026-01-20T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.825297 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.825376 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.825394 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.825421 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.825443 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:07Z","lastTransitionTime":"2026-01-20T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.929257 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.929315 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.929331 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.929356 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:07 crc kubenswrapper[4661]: I0120 18:07:07.929373 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:07Z","lastTransitionTime":"2026-01-20T18:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.033551 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.033663 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.033709 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.033740 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.033761 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.137462 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.137599 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.137624 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.137659 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.137731 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.140903 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:29:40.06055586 +0000 UTC Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.242075 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.242156 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.242216 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.242278 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.242314 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.345932 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.346040 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.346058 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.346125 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.346149 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.449549 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.450011 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.450026 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.450047 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.450060 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.524355 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.524409 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.524421 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.524440 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.524454 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: E0120 18:07:08.540432 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.546661 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.546721 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.546733 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.546755 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.546768 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: E0120 18:07:08.566257 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.570939 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.570971 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.570979 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.570995 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.571007 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: E0120 18:07:08.582288 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.586992 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.587075 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.587100 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.587133 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.587158 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: E0120 18:07:08.604571 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.609839 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.609895 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.609907 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.609929 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.609943 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: E0120 18:07:08.626924 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:08 crc kubenswrapper[4661]: E0120 18:07:08.627135 4661 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.629094 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.629267 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.629418 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.629556 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.629738 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.733296 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.733399 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.733426 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.733464 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.733490 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.838040 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.838141 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.838168 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.838202 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.838228 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.908509 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/3.log" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.909649 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/2.log" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.914801 4661 generic.go:334] "Generic (PLEG): container finished" podID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerID="a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7" exitCode=1 Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.914886 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7"} Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.914962 4661 scope.go:117] "RemoveContainer" containerID="823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.917086 4661 scope.go:117] "RemoveContainer" containerID="a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7" Jan 20 18:07:08 crc kubenswrapper[4661]: E0120 18:07:08.917430 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.942231 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e82a8ce9-e23c-4fbc-9d26-0e81374193ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.943930 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.943974 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.943989 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.944016 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.944034 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:08Z","lastTransitionTime":"2026-01-20T18:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.966596 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:08 crc kubenswrapper[4661]: I0120 18:07:08.997507 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:08Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.030546 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:35Z\\\",\\\"message\\\":\\\".BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:35.219065 6217 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:35.219313 6217 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.219547 6217 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220089 6217 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220357 6217 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220410 6217 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220459 6217 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.221651 6217 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 18:06:35.221712 6217 factory.go:656] Stopping watch factory\\\\nI0120 18:06:35.221730 6217 ovnkube.go:599] Stopped ovnkube\\\\nI0120 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:07:07Z\\\",\\\"message\\\":\\\" occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z]\\\\nI0120 18:07:07.515424 6629 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-9m9jm\\\\nI0120 18:07:07.515429 6629 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s\\\\nI0120 18:07:07.515342 6629 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-j9j6p in node crc\\\\nI0120 18:07:07.515435 6629 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0120 18:07:07.515441 6629 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0120 18:07:07.515448 6629 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s in node crc\\\\nI0120 18:07:07.515451 6629 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-j9j6p after 0 failed attempt(s)\\\\nI0120 18:07:07.515454 6629 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.046514 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.046569 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.046586 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.046612 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.046631 4661 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:09Z","lastTransitionTime":"2026-01-20T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.047469 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.066425 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.082242 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.102879 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.117811 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.135364 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:
18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.141987 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:15:33.321712839 +0000 UTC Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.142035 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.142091 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.142148 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.142148 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:09 crc kubenswrapper[4661]: E0120 18:07:09.142208 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:09 crc kubenswrapper[4661]: E0120 18:07:09.142368 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:09 crc kubenswrapper[4661]: E0120 18:07:09.142461 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:09 crc kubenswrapper[4661]: E0120 18:07:09.142753 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.154568 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.154643 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.154660 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.154719 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.154749 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:09Z","lastTransitionTime":"2026-01-20T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.157255 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347202
43b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.176452 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.196046 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1840afe6e204ba16157cfa4140926ab50dd66d6b72a0e49e4ef986f62c7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:53Z\\\",\\\"message\\\":\\\"2026-01-20T18:06:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa\\\\n2026-01-20T18:06:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa to /host/opt/cni/bin/\\\\n2026-01-20T18:06:08Z [verbose] multus-daemon started\\\\n2026-01-20T18:06:08Z [verbose] Readiness Indicator file check\\\\n2026-01-20T18:06:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.219701 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.241260 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.258966 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.259039 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.259062 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.259095 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.259117 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:09Z","lastTransitionTime":"2026-01-20T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.262201 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.277232 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:09Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.363192 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.363261 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.363278 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.363306 4661 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.363324 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:09Z","lastTransitionTime":"2026-01-20T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.467246 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.467308 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.467327 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.467356 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.467375 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:09Z","lastTransitionTime":"2026-01-20T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.571149 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.571223 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.571250 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.571280 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.571299 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:09Z","lastTransitionTime":"2026-01-20T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.674917 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.674996 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.675013 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.675034 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.675047 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:09Z","lastTransitionTime":"2026-01-20T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.778296 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.778403 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.778426 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.778460 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.778486 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:09Z","lastTransitionTime":"2026-01-20T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.882636 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.882759 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.882781 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.882810 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.882830 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:09Z","lastTransitionTime":"2026-01-20T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.922306 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/3.log" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.987903 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.988003 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.988026 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.988061 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:09 crc kubenswrapper[4661]: I0120 18:07:09.988083 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:09Z","lastTransitionTime":"2026-01-20T18:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.092574 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.092652 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.092710 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.092748 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.092773 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:10Z","lastTransitionTime":"2026-01-20T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.142328 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 01:56:32.185298853 +0000 UTC Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.160647 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.196854 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.196924 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.196943 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.196967 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.196986 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:10Z","lastTransitionTime":"2026-01-20T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.300662 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.300784 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.300805 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.300836 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.300856 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:10Z","lastTransitionTime":"2026-01-20T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.404154 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.404233 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.404252 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.404287 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.404308 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:10Z","lastTransitionTime":"2026-01-20T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.508150 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.508227 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.508250 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.508285 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.508309 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:10Z","lastTransitionTime":"2026-01-20T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.611733 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.611811 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.611829 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.611858 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.611878 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:10Z","lastTransitionTime":"2026-01-20T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.714290 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.714350 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.714368 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.714391 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.714407 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:10Z","lastTransitionTime":"2026-01-20T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.817796 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.817856 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.817878 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.817912 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.817940 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:10Z","lastTransitionTime":"2026-01-20T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.921943 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.922025 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.922044 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.922072 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:10 crc kubenswrapper[4661]: I0120 18:07:10.922093 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:10Z","lastTransitionTime":"2026-01-20T18:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.025949 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.026021 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.026039 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.026068 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.026088 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:11Z","lastTransitionTime":"2026-01-20T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.129575 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.129787 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.129820 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.129851 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.129874 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:11Z","lastTransitionTime":"2026-01-20T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.141546 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.141645 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.141728 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:11 crc kubenswrapper[4661]: E0120 18:07:11.141925 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.141977 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:11 crc kubenswrapper[4661]: E0120 18:07:11.142156 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:11 crc kubenswrapper[4661]: E0120 18:07:11.142465 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.142800 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:04:39.510229784 +0000 UTC Jan 20 18:07:11 crc kubenswrapper[4661]: E0120 18:07:11.142765 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.233368 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.233445 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.233468 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.233493 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.233511 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:11Z","lastTransitionTime":"2026-01-20T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.336072 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.336115 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.336125 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.336141 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.336152 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:11Z","lastTransitionTime":"2026-01-20T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.439487 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.439549 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.439561 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.439586 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.439602 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:11Z","lastTransitionTime":"2026-01-20T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.543051 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.543115 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.543125 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.543145 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.543491 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:11Z","lastTransitionTime":"2026-01-20T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.647521 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.647636 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.647663 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.647737 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.647761 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:11Z","lastTransitionTime":"2026-01-20T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.752021 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.752146 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.752164 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.752194 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.752213 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:11Z","lastTransitionTime":"2026-01-20T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.856630 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.856757 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.856776 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.856805 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.856826 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:11Z","lastTransitionTime":"2026-01-20T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.961115 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.961163 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.961175 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.961191 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:11 crc kubenswrapper[4661]: I0120 18:07:11.961201 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:11Z","lastTransitionTime":"2026-01-20T18:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.064737 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.064796 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.064809 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.064835 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.064849 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:12Z","lastTransitionTime":"2026-01-20T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.143594 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:38:51.727961283 +0000 UTC Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.168265 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.168330 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.168342 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.168367 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.168382 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:12Z","lastTransitionTime":"2026-01-20T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.272789 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.272876 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.272902 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.272998 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.273024 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:12Z","lastTransitionTime":"2026-01-20T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.377031 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.377180 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.377239 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.377270 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.377291 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:12Z","lastTransitionTime":"2026-01-20T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.481187 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.481284 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.481317 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.481351 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.481377 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:12Z","lastTransitionTime":"2026-01-20T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.585294 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.585373 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.585390 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.585422 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.585445 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:12Z","lastTransitionTime":"2026-01-20T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.688931 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.689006 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.689025 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.689058 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.689080 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:12Z","lastTransitionTime":"2026-01-20T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.792264 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.792314 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.792323 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.792339 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.792351 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:12Z","lastTransitionTime":"2026-01-20T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.896165 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.896236 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.896255 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.896284 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:12 crc kubenswrapper[4661]: I0120 18:07:12.896306 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:12Z","lastTransitionTime":"2026-01-20T18:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.001893 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.001980 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.002005 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.002041 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.002063 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:13Z","lastTransitionTime":"2026-01-20T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.106247 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.106316 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.106335 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.106363 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.106382 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:13Z","lastTransitionTime":"2026-01-20T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.141515 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.141586 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.141606 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.141586 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:13 crc kubenswrapper[4661]: E0120 18:07:13.141786 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:13 crc kubenswrapper[4661]: E0120 18:07:13.142100 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:13 crc kubenswrapper[4661]: E0120 18:07:13.142313 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:13 crc kubenswrapper[4661]: E0120 18:07:13.142555 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.143894 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:43:51.374591688 +0000 UTC Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.210256 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.210323 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.210340 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.210368 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.210386 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:13Z","lastTransitionTime":"2026-01-20T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.313950 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.314090 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.314111 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.314139 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.314161 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:13Z","lastTransitionTime":"2026-01-20T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.417636 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.417775 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.417794 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.417824 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.417842 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:13Z","lastTransitionTime":"2026-01-20T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.521731 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.521847 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.521868 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.521941 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.521961 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:13Z","lastTransitionTime":"2026-01-20T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.625617 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.625736 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.625760 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.625799 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.625820 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:13Z","lastTransitionTime":"2026-01-20T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.739053 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.739149 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.739176 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.739214 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.739239 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:13Z","lastTransitionTime":"2026-01-20T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.842791 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.842846 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.842859 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.842882 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.842898 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:13Z","lastTransitionTime":"2026-01-20T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.945529 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.945593 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.945606 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.945628 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:13 crc kubenswrapper[4661]: I0120 18:07:13.945644 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:13Z","lastTransitionTime":"2026-01-20T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.048826 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.048883 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.048896 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.048919 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.048936 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:14Z","lastTransitionTime":"2026-01-20T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.144263 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 22:04:18.881094063 +0000 UTC Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.152406 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.152465 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.152482 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.152501 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.152516 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:14Z","lastTransitionTime":"2026-01-20T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.157609 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.171328 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.192154 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823de7d3705ac9e99525bc0fcc4f577fb363555af9dd66346f33065839076105\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:35Z\\\",\\\"message\\\":\\\".BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 18:06:35.219065 6217 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 18:06:35.219313 6217 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.219547 6217 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220089 6217 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220357 6217 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220410 6217 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.220459 6217 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 18:06:35.221651 6217 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 18:06:35.221712 6217 factory.go:656] Stopping watch factory\\\\nI0120 18:06:35.221730 6217 ovnkube.go:599] Stopped ovnkube\\\\nI0120 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:07:07Z\\\",\\\"message\\\":\\\" occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z]\\\\nI0120 18:07:07.515424 6629 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-9m9jm\\\\nI0120 18:07:07.515429 6629 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s\\\\nI0120 18:07:07.515342 6629 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-j9j6p in node crc\\\\nI0120 18:07:07.515435 6629 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0120 18:07:07.515441 6629 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0120 18:07:07.515448 6629 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s in node crc\\\\nI0120 18:07:07.515451 6629 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-j9j6p after 0 failed attempt(s)\\\\nI0120 18:07:07.515454 6629 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:07:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-o
penvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.206061 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.222778 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.238691 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.255061 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.255160 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.255174 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.255230 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.255245 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:14Z","lastTransitionTime":"2026-01-20T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.255783 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.272245 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1840afe6e204ba16157cfa4140926ab50dd66d6b72a0e49e4ef986f62c7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:53Z\\\",\\\"message\\\":\\\"2026-01-20T18:06:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa\\\\n2026-01-20T18:06:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa to /host/opt/cni/bin/\\\\n2026-01-20T18:06:08Z [verbose] multus-daemon started\\\\n2026-01-20T18:06:08Z [verbose] Readiness Indicator file check\\\\n2026-01-20T18:06:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.284371 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 
18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.298351 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.309178 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f62f564-11b3-4142-b086-684e2834c38b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e81ace2939c908bb5c186943729767d4a14f0f0f12fc09c3e351774ca38dc47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470d14940e84e825c26cb01f9310af3ebbbc2107623e2237d96b40e918def207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://470d14940e84e825c26cb01f9310af3ebbbc2107623e2237d96b40e918def207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.320447 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.329328 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.342396 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.354250 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.358648 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.358833 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.358847 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.358873 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.358892 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:14Z","lastTransitionTime":"2026-01-20T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.370167 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.388484 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.406568 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e82a8ce9-e23c-4fbc-9d26-0e81374193ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.462500 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.462567 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.462580 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.462596 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.462608 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:14Z","lastTransitionTime":"2026-01-20T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.565373 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.565438 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.565454 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.565475 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.565486 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:14Z","lastTransitionTime":"2026-01-20T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.668450 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.668550 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.668572 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.668645 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.668690 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:14Z","lastTransitionTime":"2026-01-20T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.772630 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.772758 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.772780 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.772810 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.772831 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:14Z","lastTransitionTime":"2026-01-20T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.876969 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.877056 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.877075 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.877109 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.877130 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:14Z","lastTransitionTime":"2026-01-20T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.980559 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.980641 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.980661 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.980734 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:14 crc kubenswrapper[4661]: I0120 18:07:14.980757 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:14Z","lastTransitionTime":"2026-01-20T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.082854 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.082903 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.082912 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.082928 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.082938 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:15Z","lastTransitionTime":"2026-01-20T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.141934 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.142038 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.141989 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.141948 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:15 crc kubenswrapper[4661]: E0120 18:07:15.142244 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:15 crc kubenswrapper[4661]: E0120 18:07:15.142450 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:15 crc kubenswrapper[4661]: E0120 18:07:15.142500 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:15 crc kubenswrapper[4661]: E0120 18:07:15.142565 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.145143 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 23:54:10.406416617 +0000 UTC Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.186699 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.186773 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.186792 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.186824 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.186846 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:15Z","lastTransitionTime":"2026-01-20T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.290987 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.291046 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.291060 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.291082 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.291096 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:15Z","lastTransitionTime":"2026-01-20T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.394659 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.394796 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.394820 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.394867 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.394894 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:15Z","lastTransitionTime":"2026-01-20T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.498362 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.498432 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.498447 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.498469 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.498482 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:15Z","lastTransitionTime":"2026-01-20T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.602067 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.602145 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.602169 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.602202 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.602229 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:15Z","lastTransitionTime":"2026-01-20T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.705646 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.705717 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.705728 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.705746 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.705758 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:15Z","lastTransitionTime":"2026-01-20T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.809373 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.809444 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.809461 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.809489 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.809509 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:15Z","lastTransitionTime":"2026-01-20T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.912448 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.912524 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.912543 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.912567 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:15 crc kubenswrapper[4661]: I0120 18:07:15.912588 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:15Z","lastTransitionTime":"2026-01-20T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.016244 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.016864 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.016886 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.016915 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.016936 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:16Z","lastTransitionTime":"2026-01-20T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.120311 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.120379 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.120403 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.120436 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.120456 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:16Z","lastTransitionTime":"2026-01-20T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.145815 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:05:00.257728471 +0000 UTC Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.224345 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.224387 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.224401 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.224423 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.224442 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:16Z","lastTransitionTime":"2026-01-20T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.327906 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.328463 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.328603 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.328810 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.328945 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:16Z","lastTransitionTime":"2026-01-20T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.432457 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.432527 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.432546 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.432572 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.432592 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:16Z","lastTransitionTime":"2026-01-20T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.536449 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.536548 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.536576 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.536609 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.536636 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:16Z","lastTransitionTime":"2026-01-20T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.640192 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.640290 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.640309 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.640338 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.640357 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:16Z","lastTransitionTime":"2026-01-20T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.743918 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.744067 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.744111 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.744146 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.744170 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:16Z","lastTransitionTime":"2026-01-20T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.858932 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.859019 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.859036 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.859055 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.859068 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:16Z","lastTransitionTime":"2026-01-20T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.961556 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.961616 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.961627 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.961647 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:16 crc kubenswrapper[4661]: I0120 18:07:16.961659 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:16Z","lastTransitionTime":"2026-01-20T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.065879 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.065945 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.065961 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.065987 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.066010 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:17Z","lastTransitionTime":"2026-01-20T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.141952 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.142261 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.142348 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.142214 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:17 crc kubenswrapper[4661]: E0120 18:07:17.142624 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:17 crc kubenswrapper[4661]: E0120 18:07:17.142836 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:17 crc kubenswrapper[4661]: E0120 18:07:17.142948 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:17 crc kubenswrapper[4661]: E0120 18:07:17.143024 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.146495 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:16:12.754232034 +0000 UTC Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.169748 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.169813 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.169831 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.169859 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.169879 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:17Z","lastTransitionTime":"2026-01-20T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.274301 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.274373 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.274390 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.274419 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.274440 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:17Z","lastTransitionTime":"2026-01-20T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.378187 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.378257 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.378272 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.378297 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.378315 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:17Z","lastTransitionTime":"2026-01-20T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.482948 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.482993 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.483005 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.483021 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.483033 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:17Z","lastTransitionTime":"2026-01-20T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.586623 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.586697 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.586706 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.586725 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.586737 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:17Z","lastTransitionTime":"2026-01-20T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.690298 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.690355 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.690373 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.690399 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.690415 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:17Z","lastTransitionTime":"2026-01-20T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.793919 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.794007 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.794025 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.794056 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.794083 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:17Z","lastTransitionTime":"2026-01-20T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.898175 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.898246 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.898270 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.898301 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:17 crc kubenswrapper[4661]: I0120 18:07:17.898324 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:17Z","lastTransitionTime":"2026-01-20T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.002012 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.002109 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.002131 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.002166 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.002185 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:18Z","lastTransitionTime":"2026-01-20T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.106464 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.106582 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.106609 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.106645 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.106711 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:18Z","lastTransitionTime":"2026-01-20T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.146797 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:07:19.315519717 +0000 UTC Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.210122 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.210212 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.210229 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.210328 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.210353 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:18Z","lastTransitionTime":"2026-01-20T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.314399 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.314471 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.314491 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.314520 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.314540 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:18Z","lastTransitionTime":"2026-01-20T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.418195 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.418279 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.418305 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.418338 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.418359 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:18Z","lastTransitionTime":"2026-01-20T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.522840 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.522925 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.522943 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.522969 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.522988 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:18Z","lastTransitionTime":"2026-01-20T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.626347 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.626436 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.626459 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.626492 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.626514 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:18Z","lastTransitionTime":"2026-01-20T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.730208 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.730282 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.730304 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.730331 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.730350 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:18Z","lastTransitionTime":"2026-01-20T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.834174 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.834269 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.834290 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.834317 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.834335 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:18Z","lastTransitionTime":"2026-01-20T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.885252 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.885306 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.885323 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.885351 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.885369 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:18Z","lastTransitionTime":"2026-01-20T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: E0120 18:07:18.908335 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.917476 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.917527 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.917544 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.917572 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.917590 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:18Z","lastTransitionTime":"2026-01-20T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: E0120 18:07:18.940897 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.947074 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.947175 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.947195 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.947254 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.947274 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:18Z","lastTransitionTime":"2026-01-20T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: E0120 18:07:18.971227 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.976811 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.976875 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.976894 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.976920 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:18 crc kubenswrapper[4661]: I0120 18:07:18.976943 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:18Z","lastTransitionTime":"2026-01-20T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:18 crc kubenswrapper[4661]: E0120 18:07:18.997967 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.004022 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.004084 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.004103 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.004132 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.004152 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:19Z","lastTransitionTime":"2026-01-20T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:19 crc kubenswrapper[4661]: E0120 18:07:19.026607 4661 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T18:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T18:07:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f2069d5-53e0-4198-b42b-b73aa1252865\\\",\\\"systemUUID\\\":\\\"727045d4-7edb-4891-a9ee-dd5ccba890df\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:19Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:19 crc kubenswrapper[4661]: E0120 18:07:19.026912 4661 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.029825 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.029896 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.029921 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.029952 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.029981 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:19Z","lastTransitionTime":"2026-01-20T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.136301 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.136383 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.136409 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.136457 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.136482 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:19Z","lastTransitionTime":"2026-01-20T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.141509 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.141527 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.141524 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.141564 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:19 crc kubenswrapper[4661]: E0120 18:07:19.143267 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:19 crc kubenswrapper[4661]: E0120 18:07:19.143487 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:19 crc kubenswrapper[4661]: E0120 18:07:19.143640 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:19 crc kubenswrapper[4661]: E0120 18:07:19.143788 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.147091 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:54:11.694013395 +0000 UTC Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.240789 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.240852 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.240872 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.240903 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.240923 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:19Z","lastTransitionTime":"2026-01-20T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.345253 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.345640 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.345716 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.345753 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.345777 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:19Z","lastTransitionTime":"2026-01-20T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.450850 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.450919 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.450942 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.450970 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.450988 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:19Z","lastTransitionTime":"2026-01-20T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.554356 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.554402 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.554418 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.554443 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.554459 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:19Z","lastTransitionTime":"2026-01-20T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.657874 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.658443 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.658628 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.658797 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.658936 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:19Z","lastTransitionTime":"2026-01-20T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.762135 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.762236 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.762261 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.762301 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.762328 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:19Z","lastTransitionTime":"2026-01-20T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.866159 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.866570 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.866884 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.867060 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.867202 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:19Z","lastTransitionTime":"2026-01-20T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.972576 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.973192 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.973385 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.973569 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:19 crc kubenswrapper[4661]: I0120 18:07:19.973844 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:19Z","lastTransitionTime":"2026-01-20T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.078326 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.078387 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.078406 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.078434 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.078458 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:20Z","lastTransitionTime":"2026-01-20T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.147938 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:24:09.469181126 +0000 UTC Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.182007 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.182046 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.182057 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.182073 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.182086 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:20Z","lastTransitionTime":"2026-01-20T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.286327 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.286409 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.286429 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.286459 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.286477 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:20Z","lastTransitionTime":"2026-01-20T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.390544 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.390599 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.390622 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.390655 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.390718 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:20Z","lastTransitionTime":"2026-01-20T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.494647 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.494805 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.494824 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.494904 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.494945 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:20Z","lastTransitionTime":"2026-01-20T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.598661 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.598751 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.598770 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.598798 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.598817 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:20Z","lastTransitionTime":"2026-01-20T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.702265 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.702330 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.702343 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.702366 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.702378 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:20Z","lastTransitionTime":"2026-01-20T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.805946 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.806014 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.806034 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.806061 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.806080 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:20Z","lastTransitionTime":"2026-01-20T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.908935 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.909000 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.909065 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.909094 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:20 crc kubenswrapper[4661]: I0120 18:07:20.909117 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:20Z","lastTransitionTime":"2026-01-20T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.012581 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.012624 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.012636 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.012656 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.012689 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:21Z","lastTransitionTime":"2026-01-20T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.116569 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.116650 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.116664 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.116714 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.116730 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:21Z","lastTransitionTime":"2026-01-20T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.141311 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:21 crc kubenswrapper[4661]: E0120 18:07:21.141473 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.141596 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.141734 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:21 crc kubenswrapper[4661]: E0120 18:07:21.141827 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:21 crc kubenswrapper[4661]: E0120 18:07:21.142055 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.142269 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:21 crc kubenswrapper[4661]: E0120 18:07:21.142588 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.148662 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:19:19.979113893 +0000 UTC Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.220216 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.220264 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.220280 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.220306 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.220324 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:21Z","lastTransitionTime":"2026-01-20T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.323958 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.323996 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.324006 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.324024 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.324037 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:21Z","lastTransitionTime":"2026-01-20T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.426994 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.427133 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.427156 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.427187 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.427207 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:21Z","lastTransitionTime":"2026-01-20T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.531495 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.531591 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.531610 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.531636 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.531653 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:21Z","lastTransitionTime":"2026-01-20T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.635263 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.635334 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.635359 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.635393 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.635416 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:21Z","lastTransitionTime":"2026-01-20T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.740128 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.740175 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.740186 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.740204 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.740215 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:21Z","lastTransitionTime":"2026-01-20T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.844028 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.844165 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.844192 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.844218 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.844236 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:21Z","lastTransitionTime":"2026-01-20T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.947715 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.947814 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.947872 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.947900 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:21 crc kubenswrapper[4661]: I0120 18:07:21.947956 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:21Z","lastTransitionTime":"2026-01-20T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.050924 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.051337 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.051476 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.051665 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.051866 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:22Z","lastTransitionTime":"2026-01-20T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.143453 4661 scope.go:117] "RemoveContainer" containerID="a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7" Jan 20 18:07:22 crc kubenswrapper[4661]: E0120 18:07:22.143808 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.148819 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:12:40.158325529 +0000 UTC Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.155260 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.155341 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.155359 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.155385 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.155403 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:22Z","lastTransitionTime":"2026-01-20T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.164102 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e82a8ce9-e23c-4fbc-9d26-0e81374193ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.182814 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.207276 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8d
ea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc
/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCod
e\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.220054 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.230753 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.245094 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.259131 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.260931 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.260993 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.261008 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.261053 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.261071 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:22Z","lastTransitionTime":"2026-01-20T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.281833 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:07:07Z\\\",\\\"message\\\":\\\" occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z]\\\\nI0120 18:07:07.515424 6629 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-9m9jm\\\\nI0120 18:07:07.515429 6629 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s\\\\nI0120 18:07:07.515342 6629 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-j9j6p in node crc\\\\nI0120 18:07:07.515435 6629 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0120 18:07:07.515441 6629 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0120 18:07:07.515448 6629 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s in node crc\\\\nI0120 18:07:07.515451 6629 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-j9j6p after 0 failed attempt(s)\\\\nI0120 18:07:07.515454 6629 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.291877 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.305647 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.314865 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f62f564-11b3-4142-b086-684e2834c38b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e81ace2939c908bb5c186943729767d4a14f0f0f12fc09c3e351774ca38dc47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470d14940e84e825c26cb01f9310af3ebbbc2107623e2237d96b40e918def207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://470d14940e84e825c26cb01f9310af3ebbbc2107623e2237d96b40e918def207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.325970 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 
crc kubenswrapper[4661]: I0120 18:07:22.342012 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1840afe6e204ba16157cfa4140926ab50dd66d6b72a0e49e4ef986f62c7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:53Z\\\",\\\"message\\\":\\\"2026-01-20T18:06:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa\\\\n2026-01-20T18:06:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa to /host/opt/cni/bin/\\\\n2026-01-20T18:06:08Z [verbose] multus-daemon started\\\\n2026-01-20T18:06:08Z [verbose] Readiness Indicator file check\\\\n2026-01-20T18:06:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.353243 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 
18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.363448 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.363484 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.363492 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.363510 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.363522 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:22Z","lastTransitionTime":"2026-01-20T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.366723 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 
18:07:22.378139 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.390336 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.400625 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.466018 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.466049 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.466057 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.466076 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.466085 4661 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:22Z","lastTransitionTime":"2026-01-20T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.569177 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.569281 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.569305 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.569342 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.569366 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:22Z","lastTransitionTime":"2026-01-20T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.673880 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.673932 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.673947 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.673967 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.673979 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:22Z","lastTransitionTime":"2026-01-20T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.777696 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.777809 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.777827 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.778401 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.778467 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:22Z","lastTransitionTime":"2026-01-20T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.883715 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.883796 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.883814 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.883841 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.883862 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:22Z","lastTransitionTime":"2026-01-20T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.987223 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.987292 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.987312 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.987337 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:22 crc kubenswrapper[4661]: I0120 18:07:22.987359 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:22Z","lastTransitionTime":"2026-01-20T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.091741 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.091792 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.091812 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.091840 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.091859 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:23Z","lastTransitionTime":"2026-01-20T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.141464 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.141516 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:23 crc kubenswrapper[4661]: E0120 18:07:23.141657 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:23 crc kubenswrapper[4661]: E0120 18:07:23.141828 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.141619 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:23 crc kubenswrapper[4661]: E0120 18:07:23.142047 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.142310 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:23 crc kubenswrapper[4661]: E0120 18:07:23.142599 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.149549 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:31:32.861680826 +0000 UTC Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.195433 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.195700 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.195900 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.196101 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.196280 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:23Z","lastTransitionTime":"2026-01-20T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.299947 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.300352 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.300526 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.300726 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.300906 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:23Z","lastTransitionTime":"2026-01-20T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.405083 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.405136 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.405154 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.405179 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.405197 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:23Z","lastTransitionTime":"2026-01-20T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.508855 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.508928 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.508951 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.508990 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.509012 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:23Z","lastTransitionTime":"2026-01-20T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.612821 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.612882 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.612905 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.612937 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.612960 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:23Z","lastTransitionTime":"2026-01-20T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.716149 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.716241 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.716268 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.716298 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.716318 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:23Z","lastTransitionTime":"2026-01-20T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.819310 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.819372 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.819391 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.819418 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.819438 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:23Z","lastTransitionTime":"2026-01-20T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.894828 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:23 crc kubenswrapper[4661]: E0120 18:07:23.895107 4661 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:07:23 crc kubenswrapper[4661]: E0120 18:07:23.895210 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs podName:58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131 nodeName:}" failed. No retries permitted until 2026-01-20 18:08:27.895178008 +0000 UTC m=+164.225967700 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs") pod "network-metrics-daemon-dhd6h" (UID: "58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.923106 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.923182 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.923203 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.923236 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:23 crc kubenswrapper[4661]: I0120 18:07:23.923260 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:23Z","lastTransitionTime":"2026-01-20T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.026393 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.026469 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.026486 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.026514 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.026533 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:24Z","lastTransitionTime":"2026-01-20T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.130785 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.130890 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.130908 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.130936 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.130958 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:24Z","lastTransitionTime":"2026-01-20T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.150081 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 12:35:11.237114939 +0000 UTC Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.164602 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d03e00aaf9fc7a94d8fe25f4f6f7a028f4e5eb9956411442757ca8b2046d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.184373 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.199431 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78855c94-da90-4523-8d65-70f7fd153dee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce85015f47761ddd35031a4b2aa10eddde92a1f1ee206e6454b967b03b49372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvj2k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-svf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.210998 4661 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-tfdrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c3f1ce7-0584-4bf1-8398-a277e9a4599b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163c719cffaaa547e54e81b543b5f5b2ce5abf7f6309d2859831a14e42df189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbq77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tfdrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.226460 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e82a8ce9-e23c-4fbc-9d26-0e81374193ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887bb0c57d5fcddfad0ffb44f39fb809f945050689a5fb64f145b607b2dcd4f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786a785de87e5345f01ed57fb6cd17efebe4633953b9e6bc9c169469621aea5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf1da4bed4fed480e327c750f01ea201663449a9975540540859463e5b4821f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892bacf66ebee9e56348d6d6f391b0fd23a5c99369ddaf9280590d8598b32e62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.235067 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.235146 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.235171 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.235205 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.235232 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:24Z","lastTransitionTime":"2026-01-20T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.246973 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aafdc595f8f331b863d71124f1aa3c686ec883829377108268dd78de88f498ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a15e7bb714cbcf03a4ed8925508be80b06b04f3cd455d293237554c8ad0fdeee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.271482 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e190abed-d178-4ce7-9485-f6090ecf8578\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad84b24b0398f3f900b9440d55a7914e661a18580ef8b248ffdce4d8a6c75c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6923af243783c919b8d74338d7221f91f7c6b770d97eb3a2f7e30360376f071d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61ecbabdd991af4f3f3005e3d6fab0d3f7fa863e7503f45dd91633dfc68c597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c8fb341a4de1d1144737f83eb46ad0b301f7eb48dee0969da7ade7fbd513da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8122764bd0508f39da125b5849fbe3bad9558e511c18f26bdcf4e5b23ca3a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e382a199aa3a85c11fdf8c490a4f039a191cff8a604b004e2f4ea6dacb6800\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb0e9f6dd4681c1b791524e22d3f668ce544cdc72a33af01fa70f2dd93d2972f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxwgg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j9j6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.296194 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5947c5f0-b932-4127-a183-6b9023784c81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T18:06:02Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 18:05:56.920405 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 18:05:56.921589 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1862726087/tls.crt::/tmp/serving-cert-1862726087/tls.key\\\\\\\"\\\\nI0120 18:06:02.544098 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 18:06:02.549414 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 18:06:02.549439 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 18:06:02.549472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 18:06:02.549479 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 18:06:02.569160 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 18:06:02.569400 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 18:06:02.569536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 18:06:02.569594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 18:06:02.569648 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 18:06:02.569744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 18:06:02.569342 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 18:06:02.573278 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.317276 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.339657 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.339792 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.339816 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.339848 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.339871 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:24Z","lastTransitionTime":"2026-01-20T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.342219 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.359564 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9m9jm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c44ff326-6791-438a-8d65-b2be26e9c819\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5a607340e429cf954b1b6e147c4dbff99ffee4d311e9692410698574915af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kn7nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9m9jm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.393957 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3856f23c-8dc3-4b36-b3b7-955dff315250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:07:07Z\\\",\\\"message\\\":\\\" occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:07Z is after 2025-08-24T17:21:41Z]\\\\nI0120 18:07:07.515424 6629 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-9m9jm\\\\nI0120 18:07:07.515429 6629 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s\\\\nI0120 18:07:07.515342 6629 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-j9j6p in node crc\\\\nI0120 18:07:07.515435 6629 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0120 18:07:07.515441 6629 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0120 18:07:07.515448 6629 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s in node crc\\\\nI0120 18:07:07.515451 6629 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-j9j6p after 0 failed attempt(s)\\\\nI0120 18:07:07.515454 6629 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:07:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66kpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fxb9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.411208 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j95bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dhd6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.428713 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7511825-196e-48ea-a80c-f30a6806a15f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f30ca85f0d31021dde3b56c646ddd5d841e699b809c85e54afa944cc8035df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf1692fe971ebe4534bc83cc471812d2b2883b6f97e53728ded6cd57b40c6f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faea3c0fefa61b8f0e07a050f59ca7b88d89a7ac8dba19ab019cff00fd782da3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.442894 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f62f564-11b3-4142-b086-684e2834c38b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e81ace2939c908bb5c186943729767d4a14f0f0f12fc09c3e351774ca38dc47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470d14940e84e825c26cb01f9310af3ebbbc2107623e2237d96b40e918def207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://470d14940e84e825c26cb01f9310af3ebbbc2107623e2237d96b40e918def207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T18:05:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T18:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:05:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.444843 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.445055 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.445084 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.445115 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.445136 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:24Z","lastTransitionTime":"2026-01-20T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.462966 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d831477bdf455582c54cba87020fc1141541282a25169c4b9730a78855e5719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.486070 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-z97p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1840afe6e204ba16157cfa4140926ab50dd66d6b72a0e49e4ef986f62c7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T18:06:53Z\\\",\\\"message\\\":\\\"2026-01-20T18:06:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa\\\\n2026-01-20T18:06:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6b6eb9a0-3822-47d9-9b83-21e46cfc33fa to /host/opt/cni/bin/\\\\n2026-01-20T18:06:08Z [verbose] multus-daemon started\\\\n2026-01-20T18:06:08Z [verbose] Readiness Indicator file check\\\\n2026-01-20T18:06:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T18:06:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ff8qg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-z97p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.504298 4661 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cada643-eb7b-4036-8788-500338f73fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T18:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59b5cf3db3513f82b52401408842627d3e40bdc3009c226548556808410b2289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c1cab30f986276eb919ac7474fbde1b6d5edb6557ab47057723b68d78b782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T18:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gwqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T18:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4hf4s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 20 
18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.549308 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.549389 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.549411 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.549440 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.549459 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:24Z","lastTransitionTime":"2026-01-20T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.653638 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.653754 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.653771 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.653832 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.653849 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:24Z","lastTransitionTime":"2026-01-20T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.756905 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.756978 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.757000 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.757032 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.757057 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:24Z","lastTransitionTime":"2026-01-20T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.860153 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.860252 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.860268 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.860296 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.860315 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:24Z","lastTransitionTime":"2026-01-20T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.964169 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.964255 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.964274 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.964308 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:24 crc kubenswrapper[4661]: I0120 18:07:24.964332 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:24Z","lastTransitionTime":"2026-01-20T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.068129 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.068187 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.068201 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.068227 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.068239 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:25Z","lastTransitionTime":"2026-01-20T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.141917 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.142087 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.142172 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:25 crc kubenswrapper[4661]: E0120 18:07:25.142162 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.142275 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:25 crc kubenswrapper[4661]: E0120 18:07:25.142357 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:25 crc kubenswrapper[4661]: E0120 18:07:25.142526 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:25 crc kubenswrapper[4661]: E0120 18:07:25.142646 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.150967 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:45:31.30615916 +0000 UTC Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.172353 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.172421 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.172441 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.172475 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.172499 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:25Z","lastTransitionTime":"2026-01-20T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.276274 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.276343 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.276364 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.276394 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.276416 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:25Z","lastTransitionTime":"2026-01-20T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.379961 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.380032 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.380049 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.380088 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.380142 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:25Z","lastTransitionTime":"2026-01-20T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.484514 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.484596 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.484614 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.484642 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.484659 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:25Z","lastTransitionTime":"2026-01-20T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.589044 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.589260 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.589440 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.589576 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.589736 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:25Z","lastTransitionTime":"2026-01-20T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.692393 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.692617 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.692881 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.693044 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.693208 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:25Z","lastTransitionTime":"2026-01-20T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.795991 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.796397 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.796532 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.796710 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.796910 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:25Z","lastTransitionTime":"2026-01-20T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.901093 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.901169 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.901193 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.901228 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:25 crc kubenswrapper[4661]: I0120 18:07:25.901282 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:25Z","lastTransitionTime":"2026-01-20T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.005270 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.005358 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.005383 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.005419 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.005446 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:26Z","lastTransitionTime":"2026-01-20T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.110996 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.111080 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.111099 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.111132 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.111166 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:26Z","lastTransitionTime":"2026-01-20T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.152286 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:46:35.835255529 +0000 UTC Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.215021 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.215083 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.215101 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.215128 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.215147 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:26Z","lastTransitionTime":"2026-01-20T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.318889 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.318963 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.318981 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.319011 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.319029 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:26Z","lastTransitionTime":"2026-01-20T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.422310 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.422415 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.422437 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.422466 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.422487 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:26Z","lastTransitionTime":"2026-01-20T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.525577 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.525649 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.525702 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.525744 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.525770 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:26Z","lastTransitionTime":"2026-01-20T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.629159 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.629233 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.629255 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.629286 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.629308 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:26Z","lastTransitionTime":"2026-01-20T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.732319 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.732381 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.732397 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.732422 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.732444 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:26Z","lastTransitionTime":"2026-01-20T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.835588 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.835662 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.835746 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.835786 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.835812 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:26Z","lastTransitionTime":"2026-01-20T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.939350 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.939436 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.939485 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.939519 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:26 crc kubenswrapper[4661]: I0120 18:07:26.939542 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:26Z","lastTransitionTime":"2026-01-20T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.042719 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.043303 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.043453 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.043596 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.043785 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:27Z","lastTransitionTime":"2026-01-20T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.142140 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.142503 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.142281 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.142281 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:27 crc kubenswrapper[4661]: E0120 18:07:27.142982 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:27 crc kubenswrapper[4661]: E0120 18:07:27.143144 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:27 crc kubenswrapper[4661]: E0120 18:07:27.143625 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:27 crc kubenswrapper[4661]: E0120 18:07:27.143814 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.146573 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.146602 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.146612 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.146626 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.146640 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:27Z","lastTransitionTime":"2026-01-20T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.152925 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:41:34.77653368 +0000 UTC Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.249723 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.249773 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.249790 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.249815 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.249835 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:27Z","lastTransitionTime":"2026-01-20T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.353310 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.353398 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.353430 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.353465 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.353486 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:27Z","lastTransitionTime":"2026-01-20T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.456225 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.456291 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.456314 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.456346 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.456368 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:27Z","lastTransitionTime":"2026-01-20T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.559585 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.559623 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.559633 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.559647 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.559657 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:27Z","lastTransitionTime":"2026-01-20T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.664487 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.665342 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.665737 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.665890 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.666022 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:27Z","lastTransitionTime":"2026-01-20T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.769620 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.769753 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.769775 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.769802 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.769822 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:27Z","lastTransitionTime":"2026-01-20T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.873177 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.873274 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.873290 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.873318 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.873338 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:27Z","lastTransitionTime":"2026-01-20T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.976825 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.976930 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.976958 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.976997 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:27 crc kubenswrapper[4661]: I0120 18:07:27.977025 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:27Z","lastTransitionTime":"2026-01-20T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.081921 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.082004 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.082022 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.082050 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.082071 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:28Z","lastTransitionTime":"2026-01-20T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.153306 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:58:30.454214007 +0000 UTC Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.184885 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.184921 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.184935 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.184956 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.184971 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:28Z","lastTransitionTime":"2026-01-20T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.288575 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.288725 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.288761 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.288793 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.288821 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:28Z","lastTransitionTime":"2026-01-20T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.392732 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.392801 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.392823 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.392855 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.392878 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:28Z","lastTransitionTime":"2026-01-20T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.496934 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.497006 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.497024 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.497056 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.497074 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:28Z","lastTransitionTime":"2026-01-20T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.600649 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.600705 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.600715 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.600733 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.600743 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:28Z","lastTransitionTime":"2026-01-20T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.703587 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.703697 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.703724 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.703766 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.703792 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:28Z","lastTransitionTime":"2026-01-20T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.807393 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.807446 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.807456 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.807478 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.807489 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:28Z","lastTransitionTime":"2026-01-20T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.911184 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.911232 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.911242 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.911262 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:28 crc kubenswrapper[4661]: I0120 18:07:28.911274 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:28Z","lastTransitionTime":"2026-01-20T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.013505 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.014021 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.014172 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.014318 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.014449 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:29Z","lastTransitionTime":"2026-01-20T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.090742 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.090791 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.090803 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.090822 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.090835 4661 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T18:07:29Z","lastTransitionTime":"2026-01-20T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.142115 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.142179 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.142286 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:29 crc kubenswrapper[4661]: E0120 18:07:29.142277 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.142122 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:29 crc kubenswrapper[4661]: E0120 18:07:29.142483 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:29 crc kubenswrapper[4661]: E0120 18:07:29.142555 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:29 crc kubenswrapper[4661]: E0120 18:07:29.142637 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.154329 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:44:54.531822691 +0000 UTC Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.154430 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.171043 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2"] Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.172737 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.176336 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.176599 4661 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.176760 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.177257 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.179527 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.262303 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.262348 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.262393 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.262447 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.262461 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.270033 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-9m9jm" podStartSLOduration=85.270021498 podStartE2EDuration="1m25.270021498s" 
podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:07:29.244322451 +0000 UTC m=+105.575112113" watchObservedRunningTime="2026-01-20 18:07:29.270021498 +0000 UTC m=+105.600811160" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.310560 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=86.310531614 podStartE2EDuration="1m26.310531614s" podCreationTimestamp="2026-01-20 18:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:07:29.300076321 +0000 UTC m=+105.630865973" watchObservedRunningTime="2026-01-20 18:07:29.310531614 +0000 UTC m=+105.641321276" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.310791 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=19.310784771 podStartE2EDuration="19.310784771s" podCreationTimestamp="2026-01-20 18:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:07:29.310430121 +0000 UTC m=+105.641219783" watchObservedRunningTime="2026-01-20 18:07:29.310784771 +0000 UTC m=+105.641574433" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.345688 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-z97p2" podStartSLOduration=85.345653784 podStartE2EDuration="1m25.345653784s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:07:29.34516122 +0000 UTC m=+105.675950872" watchObservedRunningTime="2026-01-20 18:07:29.345653784 +0000 UTC m=+105.676443446" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.363998 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.364032 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.364068 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.364091 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.364121 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.364169 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.364314 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.365540 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.379823 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4hf4s" podStartSLOduration=84.379789305 podStartE2EDuration="1m24.379789305s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:07:29.36342021 +0000 UTC m=+105.694209872" watchObservedRunningTime="2026-01-20 18:07:29.379789305 +0000 UTC m=+105.710578977" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.380170 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.389696 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cf8d2\" (UID: \"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.402406 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=80.402381162 podStartE2EDuration="1m20.402381162s" podCreationTimestamp="2026-01-20 18:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:07:29.402166945 +0000 UTC m=+105.732956617" watchObservedRunningTime="2026-01-20 18:07:29.402381162 +0000 UTC m=+105.733170824" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.440727 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podStartSLOduration=85.440694374 podStartE2EDuration="1m25.440694374s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:07:29.440257011 +0000 UTC m=+105.771046683" watchObservedRunningTime="2026-01-20 18:07:29.440694374 +0000 UTC m=+105.771484056" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.458777 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tfdrt" podStartSLOduration=85.458749559 podStartE2EDuration="1m25.458749559s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:07:29.457459331 +0000 UTC m=+105.788249013" watchObservedRunningTime="2026-01-20 18:07:29.458749559 +0000 UTC m=+105.789539231" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.488616 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" Jan 20 18:07:29 crc kubenswrapper[4661]: I0120 18:07:29.533096 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-j9j6p" podStartSLOduration=85.533068487 podStartE2EDuration="1m25.533068487s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:07:29.532878221 +0000 UTC m=+105.863667893" watchObservedRunningTime="2026-01-20 18:07:29.533068487 +0000 UTC m=+105.863858149" Jan 20 18:07:30 crc kubenswrapper[4661]: I0120 18:07:30.018924 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" event={"ID":"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1","Type":"ContainerStarted","Data":"e473a66c9f74317779388fcf72bf86eefd029646f06a268d2ad826b0ec4a0876"} Jan 20 18:07:30 crc kubenswrapper[4661]: I0120 18:07:30.019918 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" event={"ID":"102e18ee-ad7e-42c7-84b3-f2d2c6d2b8c1","Type":"ContainerStarted","Data":"5a1e8a72e71617545ad3817a8fa72a4beec9c34c812d6819f60e852422ba9db0"} Jan 20 18:07:30 crc kubenswrapper[4661]: I0120 18:07:30.042425 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.042398238 podStartE2EDuration="56.042398238s" podCreationTimestamp="2026-01-20 18:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-20 18:07:29.547334531 +0000 UTC m=+105.878124203" watchObservedRunningTime="2026-01-20 18:07:30.042398238 +0000 UTC m=+106.373187940" Jan 20 18:07:30 crc kubenswrapper[4661]: I0120 18:07:30.173190 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cf8d2" podStartSLOduration=86.173161476 podStartE2EDuration="1m26.173161476s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:07:30.043026786 +0000 UTC m=+106.373816458" watchObservedRunningTime="2026-01-20 18:07:30.173161476 +0000 UTC m=+106.503951178" Jan 20 18:07:30 crc kubenswrapper[4661]: I0120 18:07:30.173970 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 20 18:07:31 crc kubenswrapper[4661]: I0120 18:07:31.141950 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:31 crc kubenswrapper[4661]: I0120 18:07:31.142130 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:31 crc kubenswrapper[4661]: I0120 18:07:31.141962 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:31 crc kubenswrapper[4661]: E0120 18:07:31.142145 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:31 crc kubenswrapper[4661]: I0120 18:07:31.141971 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:31 crc kubenswrapper[4661]: E0120 18:07:31.142486 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:31 crc kubenswrapper[4661]: E0120 18:07:31.142467 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:31 crc kubenswrapper[4661]: E0120 18:07:31.142535 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:33 crc kubenswrapper[4661]: I0120 18:07:33.142022 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:33 crc kubenswrapper[4661]: I0120 18:07:33.142114 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:33 crc kubenswrapper[4661]: I0120 18:07:33.142165 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:33 crc kubenswrapper[4661]: I0120 18:07:33.142599 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:33 crc kubenswrapper[4661]: E0120 18:07:33.142890 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:33 crc kubenswrapper[4661]: E0120 18:07:33.143070 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:33 crc kubenswrapper[4661]: E0120 18:07:33.143199 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:33 crc kubenswrapper[4661]: E0120 18:07:33.143309 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:33 crc kubenswrapper[4661]: I0120 18:07:33.143539 4661 scope.go:117] "RemoveContainer" containerID="a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7" Jan 20 18:07:33 crc kubenswrapper[4661]: E0120 18:07:33.143914 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" Jan 20 18:07:34 crc kubenswrapper[4661]: I0120 18:07:34.189889 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.189848764 podStartE2EDuration="4.189848764s" podCreationTimestamp="2026-01-20 18:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:07:34.185403605 +0000 UTC m=+110.516193317" watchObservedRunningTime="2026-01-20 18:07:34.189848764 +0000 UTC m=+110.520638466" Jan 20 18:07:35 crc kubenswrapper[4661]: I0120 18:07:35.141717 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:35 crc kubenswrapper[4661]: I0120 18:07:35.141795 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:35 crc kubenswrapper[4661]: I0120 18:07:35.141874 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:35 crc kubenswrapper[4661]: I0120 18:07:35.141875 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:35 crc kubenswrapper[4661]: E0120 18:07:35.142009 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:35 crc kubenswrapper[4661]: E0120 18:07:35.142171 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:35 crc kubenswrapper[4661]: E0120 18:07:35.142340 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:35 crc kubenswrapper[4661]: E0120 18:07:35.142441 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:37 crc kubenswrapper[4661]: I0120 18:07:37.141354 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:37 crc kubenswrapper[4661]: E0120 18:07:37.141605 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:37 crc kubenswrapper[4661]: I0120 18:07:37.141835 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:37 crc kubenswrapper[4661]: E0120 18:07:37.142020 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:37 crc kubenswrapper[4661]: I0120 18:07:37.142158 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:37 crc kubenswrapper[4661]: E0120 18:07:37.142272 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:37 crc kubenswrapper[4661]: I0120 18:07:37.142348 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:37 crc kubenswrapper[4661]: E0120 18:07:37.142425 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:39 crc kubenswrapper[4661]: I0120 18:07:39.141839 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:39 crc kubenswrapper[4661]: I0120 18:07:39.141950 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:39 crc kubenswrapper[4661]: E0120 18:07:39.141994 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:39 crc kubenswrapper[4661]: I0120 18:07:39.142060 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:39 crc kubenswrapper[4661]: E0120 18:07:39.142216 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:39 crc kubenswrapper[4661]: I0120 18:07:39.142294 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:39 crc kubenswrapper[4661]: E0120 18:07:39.142384 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:39 crc kubenswrapper[4661]: E0120 18:07:39.142476 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:40 crc kubenswrapper[4661]: I0120 18:07:40.058719 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/1.log" Jan 20 18:07:40 crc kubenswrapper[4661]: I0120 18:07:40.060807 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/0.log" Jan 20 18:07:40 crc kubenswrapper[4661]: I0120 18:07:40.060904 4661 generic.go:334] "Generic (PLEG): container finished" podID="5b6f2401-3eb9-4ee4-b79c-6faee06bc21c" containerID="ab1840afe6e204ba16157cfa4140926ab50dd66d6b72a0e49e4ef986f62c7e34" exitCode=1 Jan 20 18:07:40 crc kubenswrapper[4661]: I0120 18:07:40.061044 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-z97p2" event={"ID":"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c","Type":"ContainerDied","Data":"ab1840afe6e204ba16157cfa4140926ab50dd66d6b72a0e49e4ef986f62c7e34"} Jan 20 18:07:40 crc kubenswrapper[4661]: I0120 18:07:40.061275 4661 scope.go:117] "RemoveContainer" containerID="5d04be3c87130e9506908a5ff0bf35490bafa64b4cec7b6ae1b67c4a8bd7df5d" Jan 20 18:07:40 crc kubenswrapper[4661]: I0120 18:07:40.061796 4661 scope.go:117] "RemoveContainer" containerID="ab1840afe6e204ba16157cfa4140926ab50dd66d6b72a0e49e4ef986f62c7e34" Jan 20 18:07:40 crc kubenswrapper[4661]: E0120 18:07:40.062072 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-z97p2_openshift-multus(5b6f2401-3eb9-4ee4-b79c-6faee06bc21c)\"" pod="openshift-multus/multus-z97p2" podUID="5b6f2401-3eb9-4ee4-b79c-6faee06bc21c" Jan 20 18:07:41 crc kubenswrapper[4661]: I0120 18:07:41.068418 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/1.log" Jan 20 18:07:41 crc kubenswrapper[4661]: I0120 18:07:41.141734 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:41 crc kubenswrapper[4661]: I0120 18:07:41.141830 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:41 crc kubenswrapper[4661]: I0120 18:07:41.141805 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:41 crc kubenswrapper[4661]: I0120 18:07:41.142020 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:41 crc kubenswrapper[4661]: E0120 18:07:41.142866 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:41 crc kubenswrapper[4661]: E0120 18:07:41.143094 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:41 crc kubenswrapper[4661]: E0120 18:07:41.143182 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:41 crc kubenswrapper[4661]: E0120 18:07:41.143289 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:43 crc kubenswrapper[4661]: I0120 18:07:43.141627 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:43 crc kubenswrapper[4661]: I0120 18:07:43.141653 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:43 crc kubenswrapper[4661]: E0120 18:07:43.142180 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:43 crc kubenswrapper[4661]: E0120 18:07:43.142017 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:43 crc kubenswrapper[4661]: I0120 18:07:43.143081 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:43 crc kubenswrapper[4661]: I0120 18:07:43.143080 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:43 crc kubenswrapper[4661]: E0120 18:07:43.143234 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:43 crc kubenswrapper[4661]: E0120 18:07:43.143633 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:44 crc kubenswrapper[4661]: E0120 18:07:44.089829 4661 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 20 18:07:44 crc kubenswrapper[4661]: E0120 18:07:44.233433 4661 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 18:07:45 crc kubenswrapper[4661]: I0120 18:07:45.142065 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:45 crc kubenswrapper[4661]: E0120 18:07:45.142582 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:45 crc kubenswrapper[4661]: I0120 18:07:45.142164 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:45 crc kubenswrapper[4661]: E0120 18:07:45.142663 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:45 crc kubenswrapper[4661]: I0120 18:07:45.142196 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:45 crc kubenswrapper[4661]: E0120 18:07:45.142736 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:45 crc kubenswrapper[4661]: I0120 18:07:45.142083 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:45 crc kubenswrapper[4661]: E0120 18:07:45.142787 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:46 crc kubenswrapper[4661]: I0120 18:07:46.142840 4661 scope.go:117] "RemoveContainer" containerID="a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7" Jan 20 18:07:46 crc kubenswrapper[4661]: E0120 18:07:46.143204 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fxb9d_openshift-ovn-kubernetes(3856f23c-8dc3-4b36-b3b7-955dff315250)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" Jan 20 18:07:47 crc kubenswrapper[4661]: I0120 18:07:47.141417 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:47 crc kubenswrapper[4661]: I0120 18:07:47.141507 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:47 crc kubenswrapper[4661]: I0120 18:07:47.141417 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:47 crc kubenswrapper[4661]: E0120 18:07:47.141655 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:47 crc kubenswrapper[4661]: I0120 18:07:47.141747 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:47 crc kubenswrapper[4661]: E0120 18:07:47.141875 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:47 crc kubenswrapper[4661]: E0120 18:07:47.141993 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:47 crc kubenswrapper[4661]: E0120 18:07:47.142168 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:49 crc kubenswrapper[4661]: I0120 18:07:49.141225 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:49 crc kubenswrapper[4661]: I0120 18:07:49.141261 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:49 crc kubenswrapper[4661]: I0120 18:07:49.141284 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:49 crc kubenswrapper[4661]: I0120 18:07:49.141361 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:49 crc kubenswrapper[4661]: E0120 18:07:49.141456 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:49 crc kubenswrapper[4661]: E0120 18:07:49.141588 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:49 crc kubenswrapper[4661]: E0120 18:07:49.141727 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:49 crc kubenswrapper[4661]: E0120 18:07:49.141909 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:49 crc kubenswrapper[4661]: E0120 18:07:49.235514 4661 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 18:07:51 crc kubenswrapper[4661]: I0120 18:07:51.141650 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:51 crc kubenswrapper[4661]: I0120 18:07:51.141769 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:51 crc kubenswrapper[4661]: I0120 18:07:51.141651 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:51 crc kubenswrapper[4661]: E0120 18:07:51.141933 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:51 crc kubenswrapper[4661]: E0120 18:07:51.142052 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:51 crc kubenswrapper[4661]: E0120 18:07:51.142378 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:51 crc kubenswrapper[4661]: I0120 18:07:51.142860 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:51 crc kubenswrapper[4661]: E0120 18:07:51.143036 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:53 crc kubenswrapper[4661]: I0120 18:07:53.141387 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:53 crc kubenswrapper[4661]: I0120 18:07:53.141419 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:53 crc kubenswrapper[4661]: I0120 18:07:53.141474 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:53 crc kubenswrapper[4661]: I0120 18:07:53.141432 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:53 crc kubenswrapper[4661]: E0120 18:07:53.141602 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:53 crc kubenswrapper[4661]: E0120 18:07:53.141850 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:53 crc kubenswrapper[4661]: E0120 18:07:53.142016 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:53 crc kubenswrapper[4661]: E0120 18:07:53.142167 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:54 crc kubenswrapper[4661]: I0120 18:07:54.143760 4661 scope.go:117] "RemoveContainer" containerID="ab1840afe6e204ba16157cfa4140926ab50dd66d6b72a0e49e4ef986f62c7e34" Jan 20 18:07:54 crc kubenswrapper[4661]: E0120 18:07:54.237825 4661 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 18:07:55 crc kubenswrapper[4661]: I0120 18:07:55.134149 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/1.log" Jan 20 18:07:55 crc kubenswrapper[4661]: I0120 18:07:55.134251 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-z97p2" event={"ID":"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c","Type":"ContainerStarted","Data":"3b3a01654e524ee1a13ea5553a8ca6b24eb116690557d8b8604407d8577198dd"} Jan 20 18:07:55 crc kubenswrapper[4661]: I0120 18:07:55.141301 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:55 crc kubenswrapper[4661]: I0120 18:07:55.141333 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:55 crc kubenswrapper[4661]: I0120 18:07:55.141302 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:55 crc kubenswrapper[4661]: E0120 18:07:55.141495 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:55 crc kubenswrapper[4661]: E0120 18:07:55.141713 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:55 crc kubenswrapper[4661]: E0120 18:07:55.141824 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:55 crc kubenswrapper[4661]: I0120 18:07:55.143530 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:55 crc kubenswrapper[4661]: E0120 18:07:55.144432 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:57 crc kubenswrapper[4661]: I0120 18:07:57.141127 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:57 crc kubenswrapper[4661]: E0120 18:07:57.141296 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:57 crc kubenswrapper[4661]: I0120 18:07:57.141543 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:57 crc kubenswrapper[4661]: E0120 18:07:57.141605 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:57 crc kubenswrapper[4661]: I0120 18:07:57.141765 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:57 crc kubenswrapper[4661]: E0120 18:07:57.141820 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:57 crc kubenswrapper[4661]: I0120 18:07:57.141920 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:57 crc kubenswrapper[4661]: E0120 18:07:57.142121 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:59 crc kubenswrapper[4661]: I0120 18:07:59.141978 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:07:59 crc kubenswrapper[4661]: I0120 18:07:59.142050 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:07:59 crc kubenswrapper[4661]: I0120 18:07:59.142089 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:07:59 crc kubenswrapper[4661]: I0120 18:07:59.142197 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:07:59 crc kubenswrapper[4661]: E0120 18:07:59.143147 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:07:59 crc kubenswrapper[4661]: E0120 18:07:59.143347 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:07:59 crc kubenswrapper[4661]: E0120 18:07:59.143185 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:07:59 crc kubenswrapper[4661]: E0120 18:07:59.143590 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:07:59 crc kubenswrapper[4661]: E0120 18:07:59.239517 4661 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 18:08:00 crc kubenswrapper[4661]: I0120 18:08:00.143851 4661 scope.go:117] "RemoveContainer" containerID="a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7" Jan 20 18:08:01 crc kubenswrapper[4661]: I0120 18:08:01.141199 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:08:01 crc kubenswrapper[4661]: I0120 18:08:01.141243 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:08:01 crc kubenswrapper[4661]: I0120 18:08:01.141307 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:08:01 crc kubenswrapper[4661]: E0120 18:08:01.141362 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:08:01 crc kubenswrapper[4661]: E0120 18:08:01.141496 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:08:01 crc kubenswrapper[4661]: I0120 18:08:01.141531 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:08:01 crc kubenswrapper[4661]: E0120 18:08:01.141738 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:08:01 crc kubenswrapper[4661]: E0120 18:08:01.141793 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:08:01 crc kubenswrapper[4661]: I0120 18:08:01.161860 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/3.log" Jan 20 18:08:01 crc kubenswrapper[4661]: I0120 18:08:01.165539 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerStarted","Data":"229a253605fb06114bb299f6125c0ea1a738620cfb8a51ac9b53d4eb809f736d"} Jan 20 18:08:01 crc kubenswrapper[4661]: I0120 18:08:01.166103 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:08:01 crc kubenswrapper[4661]: I0120 18:08:01.333330 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podStartSLOduration=116.333300591 podStartE2EDuration="1m56.333300591s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:01.227919594 +0000 UTC m=+137.558709276" watchObservedRunningTime="2026-01-20 18:08:01.333300591 +0000 UTC m=+137.664090253" Jan 20 18:08:01 crc kubenswrapper[4661]: I0120 18:08:01.334181 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dhd6h"] Jan 20 18:08:01 crc kubenswrapper[4661]: I0120 18:08:01.334307 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:08:01 crc kubenswrapper[4661]: E0120 18:08:01.334435 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:08:03 crc kubenswrapper[4661]: I0120 18:08:03.142082 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:08:03 crc kubenswrapper[4661]: I0120 18:08:03.142103 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:08:03 crc kubenswrapper[4661]: I0120 18:08:03.142140 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:08:03 crc kubenswrapper[4661]: I0120 18:08:03.142089 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:08:03 crc kubenswrapper[4661]: E0120 18:08:03.142231 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dhd6h" podUID="58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131" Jan 20 18:08:03 crc kubenswrapper[4661]: E0120 18:08:03.142409 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 18:08:03 crc kubenswrapper[4661]: E0120 18:08:03.142447 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 18:08:03 crc kubenswrapper[4661]: E0120 18:08:03.142564 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 18:08:05 crc kubenswrapper[4661]: I0120 18:08:05.141233 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:08:05 crc kubenswrapper[4661]: I0120 18:08:05.141233 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:08:05 crc kubenswrapper[4661]: I0120 18:08:05.141275 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:08:05 crc kubenswrapper[4661]: I0120 18:08:05.141306 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:08:05 crc kubenswrapper[4661]: I0120 18:08:05.145377 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 20 18:08:05 crc kubenswrapper[4661]: I0120 18:08:05.146095 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 20 18:08:05 crc kubenswrapper[4661]: I0120 18:08:05.146286 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 20 18:08:05 crc kubenswrapper[4661]: I0120 18:08:05.146361 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 20 18:08:05 crc kubenswrapper[4661]: I0120 18:08:05.147938 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 20 18:08:05 crc kubenswrapper[4661]: I0120 18:08:05.148567 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.782932 4661 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.841775 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wd4nq"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.842532 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.842595 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.843721 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.844604 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.845059 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.846194 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.846416 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.849214 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7hbkg"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.849977 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.850540 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.852025 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qwp42"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.852490 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qwp42" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.854642 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.855238 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.856824 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.856889 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.859219 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.859525 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.860038 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.860150 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.860201 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.860311 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.860042 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.860788 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.860857 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.861140 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.862088 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.862390 4661 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.866773 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7bk7r"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.866852 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.867453 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.868089 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4flqc"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.868470 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.880211 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.880211 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.887023 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.887556 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.889321 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.889347 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.889407 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.889878 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.889914 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.889933 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.890261 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.890888 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.891219 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.892493 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-q7s9r"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.903512 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.904890 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.905439 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.915120 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.915377 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.915527 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.915651 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.920204 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-etcd-client\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.920265 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/302e8226-565c-44a4-bb0e-dee670200ae3-images\") pod \"machine-api-operator-5694c8668f-7hbkg\" (UID: \"302e8226-565c-44a4-bb0e-dee670200ae3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.920320 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2rcj\" (UniqueName: \"kubernetes.io/projected/ad28ae9e-274e-45fc-8202-683aadfa3494-kube-api-access-j2rcj\") pod \"service-ca-9c57cc56f-4flqc\" (UID: \"ad28ae9e-274e-45fc-8202-683aadfa3494\") " pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.920362 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fe4d701-349f-4edf-a59f-092ccfcdd40e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ngxm7\" (UID: \"1fe4d701-349f-4edf-a59f-092ccfcdd40e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.920432 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf8xf\" (UniqueName: \"kubernetes.io/projected/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-kube-api-access-qf8xf\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.920467 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/8c6c735b-2d38-430c-a5b7-10b9b06ef623-etcd-client\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.920499 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97twf\" (UniqueName: \"kubernetes.io/projected/302e8226-565c-44a4-bb0e-dee670200ae3-kube-api-access-97twf\") pod \"machine-api-operator-5694c8668f-7hbkg\" (UID: \"302e8226-565c-44a4-bb0e-dee670200ae3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.920536 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ad28ae9e-274e-45fc-8202-683aadfa3494-signing-key\") pod \"service-ca-9c57cc56f-4flqc\" (UID: \"ad28ae9e-274e-45fc-8202-683aadfa3494\") " pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.920567 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c6c735b-2d38-430c-a5b7-10b9b06ef623-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.920600 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-encryption-config\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.920629 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c6c735b-2d38-430c-a5b7-10b9b06ef623-audit-policies\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.920656 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c6c735b-2d38-430c-a5b7-10b9b06ef623-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.920918 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.922384 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.928032 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/302e8226-565c-44a4-bb0e-dee670200ae3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7hbkg\" (UID: \"302e8226-565c-44a4-bb0e-dee670200ae3\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.928158 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18723097-a708-4951-89bc-48ffc2128786-serving-cert\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.928206 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/17cc4c8d-5d73-4307-83ea-e826befa5b06-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tmtgr\" (UID: \"17cc4c8d-5d73-4307-83ea-e826befa5b06\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.936439 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.936936 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.937057 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.937125 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.937080 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.937273 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.937373 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.937502 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.937581 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.937733 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.937936 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.938503 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bh9mt"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.938783 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-tptl9"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.939061 4661 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.939386 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.939502 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.939727 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.939804 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940033 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940079 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940114 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vww\" (UniqueName: \"kubernetes.io/projected/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-kube-api-access-l8vww\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940146 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/457c15d5-4066-4d88-bbb4-a9fe13de20cd-serving-cert\") pod \"route-controller-manager-6576b87f9c-j48cg\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940174 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c6c735b-2d38-430c-a5b7-10b9b06ef623-serving-cert\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940195 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-audit\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940221 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-config\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940262 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mlmjt\" (UniqueName: \"kubernetes.io/projected/1fe4d701-349f-4edf-a59f-092ccfcdd40e-kube-api-access-mlmjt\") pod \"openshift-apiserver-operator-796bbdcf4f-ngxm7\" (UID: \"1fe4d701-349f-4edf-a59f-092ccfcdd40e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940284 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ad28ae9e-274e-45fc-8202-683aadfa3494-signing-cabundle\") pod \"service-ca-9c57cc56f-4flqc\" (UID: \"ad28ae9e-274e-45fc-8202-683aadfa3494\") " pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940302 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/302e8226-565c-44a4-bb0e-dee670200ae3-config\") pod \"machine-api-operator-5694c8668f-7hbkg\" (UID: \"302e8226-565c-44a4-bb0e-dee670200ae3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940320 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99vq6\" (UniqueName: \"kubernetes.io/projected/18723097-a708-4951-89bc-48ffc2128786-kube-api-access-99vq6\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940337 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-image-import-ca\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940364 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e6808303-41c9-4185-beed-5e7460b07075-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-446tk\" (UID: \"e6808303-41c9-4185-beed-5e7460b07075\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940386 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/457c15d5-4066-4d88-bbb4-a9fe13de20cd-config\") pod \"route-controller-manager-6576b87f9c-j48cg\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940412 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9dhd\" (UniqueName: \"kubernetes.io/projected/17cc4c8d-5d73-4307-83ea-e826befa5b06-kube-api-access-p9dhd\") pod \"package-server-manager-789f6589d5-tmtgr\" (UID: \"17cc4c8d-5d73-4307-83ea-e826befa5b06\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940432 4661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-trusted-ca-bundle\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940465 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c6c735b-2d38-430c-a5b7-10b9b06ef623-audit-dir\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940488 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940503 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/457c15d5-4066-4d88-bbb4-a9fe13de20cd-client-ca\") pod \"route-controller-manager-6576b87f9c-j48cg\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940523 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18723097-a708-4951-89bc-48ffc2128786-service-ca-bundle\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940553 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-serving-cert\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940573 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-client-ca\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940591 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-serving-cert\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940607 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb5l7\" 
(UniqueName: \"kubernetes.io/projected/e6808303-41c9-4185-beed-5e7460b07075-kube-api-access-rb5l7\") pod \"cluster-samples-operator-665b6dd947-446tk\" (UID: \"e6808303-41c9-4185-beed-5e7460b07075\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940627 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-config\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940642 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8xsw\" (UniqueName: \"kubernetes.io/projected/457c15d5-4066-4d88-bbb4-a9fe13de20cd-kube-api-access-g8xsw\") pod \"route-controller-manager-6576b87f9c-j48cg\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940659 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-etcd-serving-ca\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940679 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crnqh\" (UniqueName: \"kubernetes.io/projected/00496c34-a198-4516-bff8-0b553db85849-kube-api-access-crnqh\") pod \"migrator-59844c95c7-qwp42\" (UID: \"00496c34-a198-4516-bff8-0b553db85849\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qwp42" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940720 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18723097-a708-4951-89bc-48ffc2128786-config\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940740 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18723097-a708-4951-89bc-48ffc2128786-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940755 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-node-pullsecrets\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940771 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/8c6c735b-2d38-430c-a5b7-10b9b06ef623-encryption-config\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940793 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-audit-dir\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940810 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lknt\" (UniqueName: \"kubernetes.io/projected/8c6c735b-2d38-430c-a5b7-10b9b06ef623-kube-api-access-4lknt\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.940830 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe4d701-349f-4edf-a59f-092ccfcdd40e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ngxm7\" (UID: \"1fe4d701-349f-4edf-a59f-092ccfcdd40e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.941107 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.941607 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.941805 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.954986 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.942061 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.944596 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-n76xd"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.942126 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.942170 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.942222 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.942566 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.942824 4661 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.943104 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.943781 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.950338 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.950429 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.950463 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.950489 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.958158 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zdd7g"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.973717 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flvxz"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.974251 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.974550 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.974854 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-44vhk"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.975224 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.975520 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bvpn8"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.975845 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9htcv"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.976447 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-phg9x"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.976777 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.977061 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.977771 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f"] Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.990077 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zdd7g" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.992845 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-44vhk" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.960248 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.996620 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.963289 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 20 18:08:09 crc kubenswrapper[4661]: I0120 18:08:09.997011 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.963318 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.963557 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.963753 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.963837 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.964330 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.965044 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.965088 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.965297 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.965968 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.966017 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.966086 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.966146 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.966179 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 18:08:10 crc 
kubenswrapper[4661]: I0120 18:08:09.966206 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.966224 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.966241 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.966273 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.966316 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.966365 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.004865 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.966417 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.966516 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.969491 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.969652 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.969815 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.970146 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.970191 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.970268 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.970509 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.970621 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.970655 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.972649 4661 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.992398 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.996189 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.996305 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:09.996422 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.017506 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.018427 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.018807 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bvpn8" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.018952 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.019077 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.019086 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.019291 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.019432 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.019946 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.020357 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vx646"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.020750 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.021059 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.024733 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.025924 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.025924 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.026612 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.026741 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.027078 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.027277 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.028633 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.033500 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.034154 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.034298 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.034374 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.035152 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.035504 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xfrkj"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.035609 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.035630 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.036366 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7m2kh"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.036470 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.036753 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.036903 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.037467 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.038238 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wd4nq"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.044646 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.042419 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.044743 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-flvxz\" (UID: \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.044820 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-service-ca\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045406 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8vww\" (UniqueName: \"kubernetes.io/projected/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-kube-api-access-l8vww\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045454 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18723097-a708-4951-89bc-48ffc2128786-serving-cert\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045479 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db930c1a-9c05-419d-b168-b232e2b98e9b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6qrc\" (UID: \"db930c1a-9c05-419d-b168-b232e2b98e9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" Jan 20 18:08:10 crc 
kubenswrapper[4661]: I0120 18:08:10.045502 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a9d6945-a25a-4a09-92ee-7b90664a2edd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mlncs\" (UID: \"8a9d6945-a25a-4a09-92ee-7b90664a2edd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045521 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/52978b9f-376f-4f49-9c2d-fc3da32b178f-trusted-ca\") pod \"ingress-operator-5b745b69d9-mcf6f\" (UID: \"52978b9f-376f-4f49-9c2d-fc3da32b178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045539 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c6c735b-2d38-430c-a5b7-10b9b06ef623-serving-cert\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045561 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a1f08ef-d9a0-484e-9959-14d3ab178d28-config-volume\") pod \"collect-profiles-29482200-ckvfc\" (UID: \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045580 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-serving-cert\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045603 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045624 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7cc6dd-b02e-4e3f-b569-42201693f3e7-config\") pod \"kube-controller-manager-operator-78b949d7b-4p2m6\" (UID: \"8b7cc6dd-b02e-4e3f-b569-42201693f3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045649 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ad28ae9e-274e-45fc-8202-683aadfa3494-signing-cabundle\") pod \"service-ca-9c57cc56f-4flqc\" (UID: \"ad28ae9e-274e-45fc-8202-683aadfa3494\") " pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045679 4661 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02d802d7-1516-4eb2-98a9-2f1878609216-trusted-ca\") pod \"console-operator-58897d9998-n76xd\" (UID: \"02d802d7-1516-4eb2-98a9-2f1878609216\") " pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045697 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhsns\" (UniqueName: \"kubernetes.io/projected/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-kube-api-access-fhsns\") pod \"marketplace-operator-79b997595-flvxz\" (UID: \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045717 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-oauth-serving-cert\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045740 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0-config\") pod \"machine-approver-56656f9798-hkbjv\" (UID: \"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045763 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99vq6\" (UniqueName: \"kubernetes.io/projected/18723097-a708-4951-89bc-48ffc2128786-kube-api-access-99vq6\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045785 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e6808303-41c9-4185-beed-5e7460b07075-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-446tk\" (UID: \"e6808303-41c9-4185-beed-5e7460b07075\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045807 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/550a5702-08aa-4dca-a1a0-7adfebbd9312-proxy-tls\") pod \"machine-config-controller-84d6567774-xlt4n\" (UID: \"550a5702-08aa-4dca-a1a0-7adfebbd9312\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045828 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9d6945-a25a-4a09-92ee-7b90664a2edd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mlncs\" (UID: \"8a9d6945-a25a-4a09-92ee-7b90664a2edd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045851 4661 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/457c15d5-4066-4d88-bbb4-a9fe13de20cd-config\") pod \"route-controller-manager-6576b87f9c-j48cg\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045873 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a1a9a02c-4b40-412e-a7f1-94568385465a-profile-collector-cert\") pod \"catalog-operator-68c6474976-lrwcc\" (UID: \"a1a9a02c-4b40-412e-a7f1-94568385465a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045894 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045914 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd4ae2f-ef41-402c-bc81-83385361e291-config\") pod \"service-ca-operator-777779d784-c4pxm\" (UID: \"ebd4ae2f-ef41-402c-bc81-83385361e291\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045936 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-trusted-ca-bundle\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045957 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c6c735b-2d38-430c-a5b7-10b9b06ef623-audit-dir\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045976 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5nmr\" (UniqueName: \"kubernetes.io/projected/8e261f9f-2027-4bb2-9254-40758baaa1ea-kube-api-access-q5nmr\") pod \"openshift-config-operator-7777fb866f-9htcv\" (UID: \"8e261f9f-2027-4bb2-9254-40758baaa1ea\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.045992 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046007 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/457c15d5-4066-4d88-bbb4-a9fe13de20cd-client-ca\") pod \"route-controller-manager-6576b87f9c-j48cg\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046024 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgm2\" (UniqueName: \"kubernetes.io/projected/550a5702-08aa-4dca-a1a0-7adfebbd9312-kube-api-access-9jgm2\") pod \"machine-config-controller-84d6567774-xlt4n\" (UID: \"550a5702-08aa-4dca-a1a0-7adfebbd9312\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046043 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046073 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a1f08ef-d9a0-484e-9959-14d3ab178d28-secret-volume\") pod \"collect-profiles-29482200-ckvfc\" (UID: \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046089 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrn8w\" (UniqueName: \"kubernetes.io/projected/e4cd0e68-3282-4713-8386-8c86f56f1f70-kube-api-access-jrn8w\") pod \"multus-admission-controller-857f4d67dd-44vhk\" (UID: \"e4cd0e68-3282-4713-8386-8c86f56f1f70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-44vhk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046116 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046131 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046167 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-serving-cert\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046189 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-client-ca\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046215 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-serving-cert\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046230 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-etcd-serving-ca\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046247 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crnqh\" (UniqueName: \"kubernetes.io/projected/00496c34-a198-4516-bff8-0b553db85849-kube-api-access-crnqh\") pod \"migrator-59844c95c7-qwp42\" (UID: \"00496c34-a198-4516-bff8-0b553db85849\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qwp42" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046265 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw4mr\" (UniqueName: \"kubernetes.io/projected/ebd4ae2f-ef41-402c-bc81-83385361e291-kube-api-access-xw4mr\") pod \"service-ca-operator-777779d784-c4pxm\" (UID: \"ebd4ae2f-ef41-402c-bc81-83385361e291\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046282 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02d802d7-1516-4eb2-98a9-2f1878609216-serving-cert\") pod \"console-operator-58897d9998-n76xd\" (UID: \"02d802d7-1516-4eb2-98a9-2f1878609216\") " pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046303 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-node-pullsecrets\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046319 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnr4m\" (UniqueName: \"kubernetes.io/projected/1f0c818b-31de-43ee-a20a-1fc174261b42-kube-api-access-hnr4m\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046335 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-config\") pod 
\"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046356 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7ff0869-4b3b-447f-a012-9bc155bae99b-metrics-certs\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046371 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9n45\" (UniqueName: \"kubernetes.io/projected/a1a9a02c-4b40-412e-a7f1-94568385465a-kube-api-access-l9n45\") pod \"catalog-operator-68c6474976-lrwcc\" (UID: \"a1a9a02c-4b40-412e-a7f1-94568385465a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046394 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2rcj\" (UniqueName: \"kubernetes.io/projected/ad28ae9e-274e-45fc-8202-683aadfa3494-kube-api-access-j2rcj\") pod \"service-ca-9c57cc56f-4flqc\" (UID: \"ad28ae9e-274e-45fc-8202-683aadfa3494\") " pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046410 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpnqn\" (UniqueName: \"kubernetes.io/projected/02d802d7-1516-4eb2-98a9-2f1878609216-kube-api-access-cpnqn\") pod \"console-operator-58897d9998-n76xd\" (UID: \"02d802d7-1516-4eb2-98a9-2f1878609216\") " pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046426 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-audit-policies\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046448 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrdx6\" (UniqueName: \"kubernetes.io/projected/52978b9f-376f-4f49-9c2d-fc3da32b178f-kube-api-access-qrdx6\") pod \"ingress-operator-5b745b69d9-mcf6f\" (UID: \"52978b9f-376f-4f49-9c2d-fc3da32b178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046463 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-trusted-ca-bundle\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046488 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db930c1a-9c05-419d-b168-b232e2b98e9b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6qrc\" (UID: \"db930c1a-9c05-419d-b168-b232e2b98e9b\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046503 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1a9a02c-4b40-412e-a7f1-94568385465a-srv-cert\") pod \"catalog-operator-68c6474976-lrwcc\" (UID: \"a1a9a02c-4b40-412e-a7f1-94568385465a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046522 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ad28ae9e-274e-45fc-8202-683aadfa3494-signing-key\") pod \"service-ca-9c57cc56f-4flqc\" (UID: \"ad28ae9e-274e-45fc-8202-683aadfa3494\") " pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046538 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97twf\" (UniqueName: \"kubernetes.io/projected/302e8226-565c-44a4-bb0e-dee670200ae3-kube-api-access-97twf\") pod \"machine-api-operator-5694c8668f-7hbkg\" (UID: \"302e8226-565c-44a4-bb0e-dee670200ae3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.046556 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c7ff0869-4b3b-447f-a012-9bc155bae99b-default-certificate\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.047380 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052072 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-encryption-config\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052144 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c6c735b-2d38-430c-a5b7-10b9b06ef623-audit-policies\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052161 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c6c735b-2d38-430c-a5b7-10b9b06ef623-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052196 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/550a5702-08aa-4dca-a1a0-7adfebbd9312-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xlt4n\" (UID: \"550a5702-08aa-4dca-a1a0-7adfebbd9312\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052217 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052242 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7ff0869-4b3b-447f-a012-9bc155bae99b-service-ca-bundle\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052280 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/17cc4c8d-5d73-4307-83ea-e826befa5b06-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tmtgr\" (UID: \"17cc4c8d-5d73-4307-83ea-e826befa5b06\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052300 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-flvxz\" (UID: \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052319 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d697c\" (UniqueName: \"kubernetes.io/projected/7a1f08ef-d9a0-484e-9959-14d3ab178d28-kube-api-access-d697c\") pod \"collect-profiles-29482200-ckvfc\" (UID: \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052339 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/457c15d5-4066-4d88-bbb4-a9fe13de20cd-serving-cert\") pod \"route-controller-manager-6576b87f9c-j48cg\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052359 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d802d7-1516-4eb2-98a9-2f1878609216-config\") pod \"console-operator-58897d9998-n76xd\" (UID: \"02d802d7-1516-4eb2-98a9-2f1878609216\") " pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052378 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-oauth-config\") pod \"console-f9d7485db-phg9x\" (UID: 
\"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052401 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b7cc6dd-b02e-4e3f-b569-42201693f3e7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4p2m6\" (UID: \"8b7cc6dd-b02e-4e3f-b569-42201693f3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052425 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-config\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052443 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-audit\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052461 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b7cc6dd-b02e-4e3f-b569-42201693f3e7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4p2m6\" (UID: \"8b7cc6dd-b02e-4e3f-b569-42201693f3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052502 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlmjt\" (UniqueName: \"kubernetes.io/projected/1fe4d701-349f-4edf-a59f-092ccfcdd40e-kube-api-access-mlmjt\") pod \"openshift-apiserver-operator-796bbdcf4f-ngxm7\" (UID: \"1fe4d701-349f-4edf-a59f-092ccfcdd40e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052522 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/302e8226-565c-44a4-bb0e-dee670200ae3-config\") pod \"machine-api-operator-5694c8668f-7hbkg\" (UID: \"302e8226-565c-44a4-bb0e-dee670200ae3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052540 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db930c1a-9c05-419d-b168-b232e2b98e9b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6qrc\" (UID: \"db930c1a-9c05-419d-b168-b232e2b98e9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052560 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f0c818b-31de-43ee-a20a-1fc174261b42-audit-dir\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" 
Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052578 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/52978b9f-376f-4f49-9c2d-fc3da32b178f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mcf6f\" (UID: \"52978b9f-376f-4f49-9c2d-fc3da32b178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052595 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxvmg\" (UniqueName: \"kubernetes.io/projected/4c500541-c3f2-4f6d-8bb7-1227aa74989a-kube-api-access-hxvmg\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052622 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-image-import-ca\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052648 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9dhd\" (UniqueName: \"kubernetes.io/projected/17cc4c8d-5d73-4307-83ea-e826befa5b06-kube-api-access-p9dhd\") pod \"package-server-manager-789f6589d5-tmtgr\" (UID: \"17cc4c8d-5d73-4307-83ea-e826befa5b06\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052672 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052720 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052738 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-599hc\" (UniqueName: \"kubernetes.io/projected/081efbd4-859f-49bb-84d6-778ce124b602-kube-api-access-599hc\") pod \"dns-operator-744455d44c-zdd7g\" (UID: \"081efbd4-859f-49bb-84d6-778ce124b602\") " pod="openshift-dns-operator/dns-operator-744455d44c-zdd7g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052755 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd4ae2f-ef41-402c-bc81-83385361e291-serving-cert\") pod \"service-ca-operator-777779d784-c4pxm\" (UID: \"ebd4ae2f-ef41-402c-bc81-83385361e291\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 
18:08:10.052777 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18723097-a708-4951-89bc-48ffc2128786-service-ca-bundle\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052796 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/081efbd4-859f-49bb-84d6-778ce124b602-metrics-tls\") pod \"dns-operator-744455d44c-zdd7g\" (UID: \"081efbd4-859f-49bb-84d6-778ce124b602\") " pod="openshift-dns-operator/dns-operator-744455d44c-zdd7g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052816 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb5l7\" (UniqueName: \"kubernetes.io/projected/e6808303-41c9-4185-beed-5e7460b07075-kube-api-access-rb5l7\") pod \"cluster-samples-operator-665b6dd947-446tk\" (UID: \"e6808303-41c9-4185-beed-5e7460b07075\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052836 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-config\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052856 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8xsw\" (UniqueName: \"kubernetes.io/projected/457c15d5-4066-4d88-bbb4-a9fe13de20cd-kube-api-access-g8xsw\") pod \"route-controller-manager-6576b87f9c-j48cg\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052873 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/52978b9f-376f-4f49-9c2d-fc3da32b178f-metrics-tls\") pod \"ingress-operator-5b745b69d9-mcf6f\" (UID: \"52978b9f-376f-4f49-9c2d-fc3da32b178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052903 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18723097-a708-4951-89bc-48ffc2128786-config\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052921 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e4cd0e68-3282-4713-8386-8c86f56f1f70-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-44vhk\" (UID: \"e4cd0e68-3282-4713-8386-8c86f56f1f70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-44vhk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052940 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18723097-a708-4951-89bc-48ffc2128786-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052958 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c6c735b-2d38-430c-a5b7-10b9b06ef623-encryption-config\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052979 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e261f9f-2027-4bb2-9254-40758baaa1ea-serving-cert\") pod \"openshift-config-operator-7777fb866f-9htcv\" (UID: \"8e261f9f-2027-4bb2-9254-40758baaa1ea\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.052997 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-audit-dir\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053013 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lknt\" (UniqueName: \"kubernetes.io/projected/8c6c735b-2d38-430c-a5b7-10b9b06ef623-kube-api-access-4lknt\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053027 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c7ff0869-4b3b-447f-a012-9bc155bae99b-stats-auth\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053045 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe4d701-349f-4edf-a59f-092ccfcdd40e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ngxm7\" (UID: \"1fe4d701-349f-4edf-a59f-092ccfcdd40e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053062 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs5q2\" (UniqueName: \"kubernetes.io/projected/e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0-kube-api-access-qs5q2\") pod \"machine-approver-56656f9798-hkbjv\" (UID: \"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053081 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-etcd-client\") pod \"apiserver-76f77b778f-q7s9r\" (UID: 
\"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053097 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/302e8226-565c-44a4-bb0e-dee670200ae3-images\") pod \"machine-api-operator-5694c8668f-7hbkg\" (UID: \"302e8226-565c-44a4-bb0e-dee670200ae3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053113 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0-machine-approver-tls\") pod \"machine-approver-56656f9798-hkbjv\" (UID: \"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053131 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fe4d701-349f-4edf-a59f-092ccfcdd40e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ngxm7\" (UID: \"1fe4d701-349f-4edf-a59f-092ccfcdd40e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053150 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9qpp\" (UniqueName: \"kubernetes.io/projected/c7ff0869-4b3b-447f-a012-9bc155bae99b-kube-api-access-f9qpp\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053168 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf8xf\" (UniqueName: \"kubernetes.io/projected/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-kube-api-access-qf8xf\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053185 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c6c735b-2d38-430c-a5b7-10b9b06ef623-etcd-client\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053201 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8e261f9f-2027-4bb2-9254-40758baaa1ea-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9htcv\" (UID: \"8e261f9f-2027-4bb2-9254-40758baaa1ea\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053218 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053236 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqrqq\" (UniqueName: \"kubernetes.io/projected/afe96487-2a45-4ad8-8f17-7f33186f55f4-kube-api-access-tqrqq\") pod \"downloads-7954f5f757-bvpn8\" (UID: \"afe96487-2a45-4ad8-8f17-7f33186f55f4\") " pod="openshift-console/downloads-7954f5f757-bvpn8" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053257 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c6c735b-2d38-430c-a5b7-10b9b06ef623-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053282 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/302e8226-565c-44a4-bb0e-dee670200ae3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7hbkg\" (UID: \"302e8226-565c-44a4-bb0e-dee670200ae3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053301 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnzzj\" (UniqueName: \"kubernetes.io/projected/8a9d6945-a25a-4a09-92ee-7b90664a2edd-kube-api-access-nnzzj\") pod \"kube-storage-version-migrator-operator-b67b599dd-mlncs\" (UID: \"8a9d6945-a25a-4a09-92ee-7b90664a2edd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053319 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.053336 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0-auth-proxy-config\") pod \"machine-approver-56656f9798-hkbjv\" (UID: \"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.054185 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/457c15d5-4066-4d88-bbb4-a9fe13de20cd-client-ca\") pod \"route-controller-manager-6576b87f9c-j48cg\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.055179 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.056074 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dwpwk"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.057557 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.057877 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/457c15d5-4066-4d88-bbb4-a9fe13de20cd-config\") pod \"route-controller-manager-6576b87f9c-j48cg\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.057973 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-audit-dir\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.059462 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e6808303-41c9-4185-beed-5e7460b07075-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-446tk\" (UID: \"e6808303-41c9-4185-beed-5e7460b07075\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.059908 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c6c735b-2d38-430c-a5b7-10b9b06ef623-serving-cert\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.060605 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-config\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.060812 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c6c735b-2d38-430c-a5b7-10b9b06ef623-audit-policies\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.061264 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c6c735b-2d38-430c-a5b7-10b9b06ef623-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.061648 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-image-import-ca\") pod \"apiserver-76f77b778f-q7s9r\" (UID: 
\"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.062450 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ad28ae9e-274e-45fc-8202-683aadfa3494-signing-cabundle\") pod \"service-ca-9c57cc56f-4flqc\" (UID: \"ad28ae9e-274e-45fc-8202-683aadfa3494\") " pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.063054 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-config\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.063130 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-encryption-config\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.063549 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe4d701-349f-4edf-a59f-092ccfcdd40e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ngxm7\" (UID: \"1fe4d701-349f-4edf-a59f-092ccfcdd40e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.063742 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-audit\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.063817 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-client-ca\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.064552 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/302e8226-565c-44a4-bb0e-dee670200ae3-config\") pod \"machine-api-operator-5694c8668f-7hbkg\" (UID: \"302e8226-565c-44a4-bb0e-dee670200ae3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.064674 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c6c735b-2d38-430c-a5b7-10b9b06ef623-etcd-client\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.065327 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-etcd-serving-ca\") pod \"apiserver-76f77b778f-q7s9r\" (UID: 
\"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.065361 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18723097-a708-4951-89bc-48ffc2128786-config\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.065384 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c6c735b-2d38-430c-a5b7-10b9b06ef623-audit-dir\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.065717 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-etcd-client\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.065988 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/17cc4c8d-5d73-4307-83ea-e826befa5b06-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tmtgr\" (UID: \"17cc4c8d-5d73-4307-83ea-e826befa5b06\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.066253 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18723097-a708-4951-89bc-48ffc2128786-service-ca-bundle\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.066281 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18723097-a708-4951-89bc-48ffc2128786-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.066776 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-node-pullsecrets\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.066844 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/302e8226-565c-44a4-bb0e-dee670200ae3-images\") pod \"machine-api-operator-5694c8668f-7hbkg\" (UID: \"302e8226-565c-44a4-bb0e-dee670200ae3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.066875 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1fe4d701-349f-4edf-a59f-092ccfcdd40e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ngxm7\" (UID: \"1fe4d701-349f-4edf-a59f-092ccfcdd40e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.066938 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-trusted-ca-bundle\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.067221 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c6c735b-2d38-430c-a5b7-10b9b06ef623-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.069161 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18723097-a708-4951-89bc-48ffc2128786-serving-cert\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.069307 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c6c735b-2d38-430c-a5b7-10b9b06ef623-encryption-config\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.069531 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7hbkg"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.070338 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.071118 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/457c15d5-4066-4d88-bbb4-a9fe13de20cd-serving-cert\") pod \"route-controller-manager-6576b87f9c-j48cg\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.072179 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-serving-cert\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.072972 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ad28ae9e-274e-45fc-8202-683aadfa3494-signing-key\") pod \"service-ca-9c57cc56f-4flqc\" (UID: \"ad28ae9e-274e-45fc-8202-683aadfa3494\") " pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.073523 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-bh9mt"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.074242 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-serving-cert\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.075106 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qwp42"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.075510 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/302e8226-565c-44a4-bb0e-dee670200ae3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7hbkg\" (UID: \"302e8226-565c-44a4-bb0e-dee670200ae3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.076208 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.077158 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4flqc"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.078220 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.081274 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flvxz"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.084091 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.098466 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.106541 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.108374 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.109501 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bvpn8"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.116413 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.122835 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-phg9x"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.129568 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vx646"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.132043 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.136247 
4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.148612 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zdd7g"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.150827 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7bk7r"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.153279 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.153758 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.155483 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xfrkj"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.156923 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76bf33d8-cdc1-4d99-a84f-4e9289a963af-proxy-tls\") pod \"machine-config-operator-74547568cd-vx646\" (UID: \"76bf33d8-cdc1-4d99-a84f-4e9289a963af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.156977 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db930c1a-9c05-419d-b168-b232e2b98e9b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6qrc\" (UID: \"db930c1a-9c05-419d-b168-b232e2b98e9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.156998 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f0c818b-31de-43ee-a20a-1fc174261b42-audit-dir\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157019 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/52978b9f-376f-4f49-9c2d-fc3da32b178f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mcf6f\" (UID: \"52978b9f-376f-4f49-9c2d-fc3da32b178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157040 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxvmg\" (UniqueName: \"kubernetes.io/projected/4c500541-c3f2-4f6d-8bb7-1227aa74989a-kube-api-access-hxvmg\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157059 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x95s\" (UniqueName: \"kubernetes.io/projected/413ba99d-7214-4981-98fb-910c4f5731d8-kube-api-access-5x95s\") pod \"csi-hostpathplugin-dwpwk\" (UID: 
\"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157094 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157116 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157141 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd4ae2f-ef41-402c-bc81-83385361e291-serving-cert\") pod \"service-ca-operator-777779d784-c4pxm\" (UID: \"ebd4ae2f-ef41-402c-bc81-83385361e291\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157160 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-599hc\" (UniqueName: \"kubernetes.io/projected/081efbd4-859f-49bb-84d6-778ce124b602-kube-api-access-599hc\") pod \"dns-operator-744455d44c-zdd7g\" (UID: \"081efbd4-859f-49bb-84d6-778ce124b602\") " pod="openshift-dns-operator/dns-operator-744455d44c-zdd7g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157189 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/081efbd4-859f-49bb-84d6-778ce124b602-metrics-tls\") pod \"dns-operator-744455d44c-zdd7g\" (UID: \"081efbd4-859f-49bb-84d6-778ce124b602\") " pod="openshift-dns-operator/dns-operator-744455d44c-zdd7g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157222 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/52978b9f-376f-4f49-9c2d-fc3da32b178f-metrics-tls\") pod \"ingress-operator-5b745b69d9-mcf6f\" (UID: \"52978b9f-376f-4f49-9c2d-fc3da32b178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157259 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e4cd0e68-3282-4713-8386-8c86f56f1f70-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-44vhk\" (UID: \"e4cd0e68-3282-4713-8386-8c86f56f1f70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-44vhk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157291 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e261f9f-2027-4bb2-9254-40758baaa1ea-serving-cert\") pod \"openshift-config-operator-7777fb866f-9htcv\" (UID: \"8e261f9f-2027-4bb2-9254-40758baaa1ea\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157317 4661 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c7ff0869-4b3b-447f-a012-9bc155bae99b-stats-auth\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157336 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs5q2\" (UniqueName: \"kubernetes.io/projected/e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0-kube-api-access-qs5q2\") pod \"machine-approver-56656f9798-hkbjv\" (UID: \"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157353 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0-machine-approver-tls\") pod \"machine-approver-56656f9798-hkbjv\" (UID: \"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157372 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-csi-data-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157392 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9qpp\" (UniqueName: \"kubernetes.io/projected/c7ff0869-4b3b-447f-a012-9bc155bae99b-kube-api-access-f9qpp\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157418 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8e261f9f-2027-4bb2-9254-40758baaa1ea-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9htcv\" (UID: \"8e261f9f-2027-4bb2-9254-40758baaa1ea\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157437 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157455 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqrqq\" (UniqueName: \"kubernetes.io/projected/afe96487-2a45-4ad8-8f17-7f33186f55f4-kube-api-access-tqrqq\") pod \"downloads-7954f5f757-bvpn8\" (UID: \"afe96487-2a45-4ad8-8f17-7f33186f55f4\") " pod="openshift-console/downloads-7954f5f757-bvpn8" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157472 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnzzj\" (UniqueName: 
\"kubernetes.io/projected/8a9d6945-a25a-4a09-92ee-7b90664a2edd-kube-api-access-nnzzj\") pod \"kube-storage-version-migrator-operator-b67b599dd-mlncs\" (UID: \"8a9d6945-a25a-4a09-92ee-7b90664a2edd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157488 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157505 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0-auth-proxy-config\") pod \"machine-approver-56656f9798-hkbjv\" (UID: \"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157523 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-socket-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157543 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/76bf33d8-cdc1-4d99-a84f-4e9289a963af-images\") pod \"machine-config-operator-74547568cd-vx646\" (UID: \"76bf33d8-cdc1-4d99-a84f-4e9289a963af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157563 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157582 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3a2340-8262-4080-8c72-8ced3d6c0c5a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9j9rr\" (UID: \"0d3a2340-8262-4080-8c72-8ced3d6c0c5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157599 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-flvxz\" (UID: \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157621 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-service-ca\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157667 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db930c1a-9c05-419d-b168-b232e2b98e9b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6qrc\" (UID: \"db930c1a-9c05-419d-b168-b232e2b98e9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157741 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a9d6945-a25a-4a09-92ee-7b90664a2edd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mlncs\" (UID: \"8a9d6945-a25a-4a09-92ee-7b90664a2edd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157762 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-service-ca\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157783 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/52978b9f-376f-4f49-9c2d-fc3da32b178f-trusted-ca\") pod \"ingress-operator-5b745b69d9-mcf6f\" (UID: \"52978b9f-376f-4f49-9c2d-fc3da32b178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157805 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a1f08ef-d9a0-484e-9959-14d3ab178d28-config-volume\") pod \"collect-profiles-29482200-ckvfc\" (UID: \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157822 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-serving-cert\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157840 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157857 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdzzp\" (UniqueName: \"kubernetes.io/projected/28e39614-3757-41e0-b164-eb1964ff6a8d-kube-api-access-bdzzp\") pod \"etcd-operator-b45778765-xfrkj\" (UID: 
\"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157878 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7cc6dd-b02e-4e3f-b569-42201693f3e7-config\") pod \"kube-controller-manager-operator-78b949d7b-4p2m6\" (UID: \"8b7cc6dd-b02e-4e3f-b569-42201693f3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157909 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02d802d7-1516-4eb2-98a9-2f1878609216-trusted-ca\") pod \"console-operator-58897d9998-n76xd\" (UID: \"02d802d7-1516-4eb2-98a9-2f1878609216\") " pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157926 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhsns\" (UniqueName: \"kubernetes.io/projected/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-kube-api-access-fhsns\") pod \"marketplace-operator-79b997595-flvxz\" (UID: \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157936 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.157943 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-oauth-serving-cert\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.158017 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0-config\") pod \"machine-approver-56656f9798-hkbjv\" (UID: \"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.158231 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/550a5702-08aa-4dca-a1a0-7adfebbd9312-proxy-tls\") pod \"machine-config-controller-84d6567774-xlt4n\" (UID: \"550a5702-08aa-4dca-a1a0-7adfebbd9312\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159321 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9d6945-a25a-4a09-92ee-7b90664a2edd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mlncs\" (UID: \"8a9d6945-a25a-4a09-92ee-7b90664a2edd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159352 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a1a9a02c-4b40-412e-a7f1-94568385465a-profile-collector-cert\") 
pod \"catalog-operator-68c6474976-lrwcc\" (UID: \"a1a9a02c-4b40-412e-a7f1-94568385465a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159375 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-ca\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159396 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-client\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159419 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159441 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd4ae2f-ef41-402c-bc81-83385361e291-config\") pod \"service-ca-operator-777779d784-c4pxm\" (UID: \"ebd4ae2f-ef41-402c-bc81-83385361e291\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159467 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5nmr\" (UniqueName: \"kubernetes.io/projected/8e261f9f-2027-4bb2-9254-40758baaa1ea-kube-api-access-q5nmr\") pod \"openshift-config-operator-7777fb866f-9htcv\" (UID: \"8e261f9f-2027-4bb2-9254-40758baaa1ea\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159489 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79klp\" (UniqueName: \"kubernetes.io/projected/a507ebcc-7e0b-445b-9688-882358d365ce-kube-api-access-79klp\") pod \"control-plane-machine-set-operator-78cbb6b69f-qdlnn\" (UID: \"a507ebcc-7e0b-445b-9688-882358d365ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159508 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76bf33d8-cdc1-4d99-a84f-4e9289a963af-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vx646\" (UID: \"76bf33d8-cdc1-4d99-a84f-4e9289a963af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159529 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jgm2\" (UniqueName: \"kubernetes.io/projected/550a5702-08aa-4dca-a1a0-7adfebbd9312-kube-api-access-9jgm2\") pod \"machine-config-controller-84d6567774-xlt4n\" 
(UID: \"550a5702-08aa-4dca-a1a0-7adfebbd9312\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159551 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159572 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a1f08ef-d9a0-484e-9959-14d3ab178d28-secret-volume\") pod \"collect-profiles-29482200-ckvfc\" (UID: \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159595 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2m8l\" (UniqueName: \"kubernetes.io/projected/f1297f46-d734-41e9-a7b8-5033ce03f315-kube-api-access-f2m8l\") pod \"openshift-controller-manager-operator-756b6f6bc6-2qq6g\" (UID: \"f1297f46-d734-41e9-a7b8-5033ce03f315\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159613 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28e39614-3757-41e0-b164-eb1964ff6a8d-serving-cert\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159638 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159659 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159694 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrn8w\" (UniqueName: \"kubernetes.io/projected/e4cd0e68-3282-4713-8386-8c86f56f1f70-kube-api-access-jrn8w\") pod \"multus-admission-controller-857f4d67dd-44vhk\" (UID: \"e4cd0e68-3282-4713-8386-8c86f56f1f70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-44vhk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159731 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-registration-dir\") pod 
\"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159750 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3a2340-8262-4080-8c72-8ced3d6c0c5a-config\") pod \"kube-apiserver-operator-766d6c64bb-9j9rr\" (UID: \"0d3a2340-8262-4080-8c72-8ced3d6c0c5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159770 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr5h5\" (UniqueName: \"kubernetes.io/projected/76bf33d8-cdc1-4d99-a84f-4e9289a963af-kube-api-access-gr5h5\") pod \"machine-config-operator-74547568cd-vx646\" (UID: \"76bf33d8-cdc1-4d99-a84f-4e9289a963af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159796 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw4mr\" (UniqueName: \"kubernetes.io/projected/ebd4ae2f-ef41-402c-bc81-83385361e291-kube-api-access-xw4mr\") pod \"service-ca-operator-777779d784-c4pxm\" (UID: \"ebd4ae2f-ef41-402c-bc81-83385361e291\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159817 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a507ebcc-7e0b-445b-9688-882358d365ce-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qdlnn\" (UID: \"a507ebcc-7e0b-445b-9688-882358d365ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159836 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02d802d7-1516-4eb2-98a9-2f1878609216-serving-cert\") pod \"console-operator-58897d9998-n76xd\" (UID: \"02d802d7-1516-4eb2-98a9-2f1878609216\") " pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159859 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnr4m\" (UniqueName: \"kubernetes.io/projected/1f0c818b-31de-43ee-a20a-1fc174261b42-kube-api-access-hnr4m\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159877 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-config\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159897 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9n45\" (UniqueName: \"kubernetes.io/projected/a1a9a02c-4b40-412e-a7f1-94568385465a-kube-api-access-l9n45\") pod \"catalog-operator-68c6474976-lrwcc\" (UID: 
\"a1a9a02c-4b40-412e-a7f1-94568385465a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159929 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7ff0869-4b3b-447f-a012-9bc155bae99b-metrics-certs\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159952 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpnqn\" (UniqueName: \"kubernetes.io/projected/02d802d7-1516-4eb2-98a9-2f1878609216-kube-api-access-cpnqn\") pod \"console-operator-58897d9998-n76xd\" (UID: \"02d802d7-1516-4eb2-98a9-2f1878609216\") " pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159971 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-audit-policies\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159993 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrdx6\" (UniqueName: \"kubernetes.io/projected/52978b9f-376f-4f49-9c2d-fc3da32b178f-kube-api-access-qrdx6\") pod \"ingress-operator-5b745b69d9-mcf6f\" (UID: \"52978b9f-376f-4f49-9c2d-fc3da32b178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160017 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-trusted-ca-bundle\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160054 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db930c1a-9c05-419d-b168-b232e2b98e9b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6qrc\" (UID: \"db930c1a-9c05-419d-b168-b232e2b98e9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160073 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1a9a02c-4b40-412e-a7f1-94568385465a-srv-cert\") pod \"catalog-operator-68c6474976-lrwcc\" (UID: \"a1a9a02c-4b40-412e-a7f1-94568385465a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160095 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1297f46-d734-41e9-a7b8-5033ce03f315-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2qq6g\" (UID: \"f1297f46-d734-41e9-a7b8-5033ce03f315\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" Jan 20 18:08:10 crc 
kubenswrapper[4661]: I0120 18:08:10.160140 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c7ff0869-4b3b-447f-a012-9bc155bae99b-default-certificate\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160164 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1297f46-d734-41e9-a7b8-5033ce03f315-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2qq6g\" (UID: \"f1297f46-d734-41e9-a7b8-5033ce03f315\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160189 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/550a5702-08aa-4dca-a1a0-7adfebbd9312-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xlt4n\" (UID: \"550a5702-08aa-4dca-a1a0-7adfebbd9312\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160194 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160210 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160233 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7ff0869-4b3b-447f-a012-9bc155bae99b-service-ca-bundle\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160264 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-flvxz\" (UID: \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160282 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d697c\" (UniqueName: \"kubernetes.io/projected/7a1f08ef-d9a0-484e-9959-14d3ab178d28-kube-api-access-d697c\") pod \"collect-profiles-29482200-ckvfc\" (UID: \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160302 4661 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-config\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160320 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d802d7-1516-4eb2-98a9-2f1878609216-config\") pod \"console-operator-58897d9998-n76xd\" (UID: \"02d802d7-1516-4eb2-98a9-2f1878609216\") " pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160339 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-oauth-config\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160356 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-plugins-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160375 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3a2340-8262-4080-8c72-8ced3d6c0c5a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9j9rr\" (UID: \"0d3a2340-8262-4080-8c72-8ced3d6c0c5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160394 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b7cc6dd-b02e-4e3f-b569-42201693f3e7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4p2m6\" (UID: \"8b7cc6dd-b02e-4e3f-b569-42201693f3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160414 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b7cc6dd-b02e-4e3f-b569-42201693f3e7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4p2m6\" (UID: \"8b7cc6dd-b02e-4e3f-b569-42201693f3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.160429 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-mountpoint-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.161251 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/7a1f08ef-d9a0-484e-9959-14d3ab178d28-config-volume\") pod \"collect-profiles-29482200-ckvfc\" (UID: \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.162331 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.159835 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0-auth-proxy-config\") pod \"machine-approver-56656f9798-hkbjv\" (UID: \"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.165458 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0-config\") pod \"machine-approver-56656f9798-hkbjv\" (UID: \"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.165476 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02d802d7-1516-4eb2-98a9-2f1878609216-trusted-ca\") pod \"console-operator-58897d9998-n76xd\" (UID: \"02d802d7-1516-4eb2-98a9-2f1878609216\") " pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.158514 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f0c818b-31de-43ee-a20a-1fc174261b42-audit-dir\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.158728 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.165579 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.165593 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.165604 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.165822 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 
18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.167216 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/550a5702-08aa-4dca-a1a0-7adfebbd9312-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xlt4n\" (UID: \"550a5702-08aa-4dca-a1a0-7adfebbd9312\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.167418 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.167779 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.168122 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.168996 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d802d7-1516-4eb2-98a9-2f1878609216-config\") pod \"console-operator-58897d9998-n76xd\" (UID: \"02d802d7-1516-4eb2-98a9-2f1878609216\") " pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.169781 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c7ff0869-4b3b-447f-a012-9bc155bae99b-default-certificate\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.169900 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7ff0869-4b3b-447f-a012-9bc155bae99b-service-ca-bundle\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.170462 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e4cd0e68-3282-4713-8386-8c86f56f1f70-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-44vhk\" (UID: \"e4cd0e68-3282-4713-8386-8c86f56f1f70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-44vhk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.171159 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8e261f9f-2027-4bb2-9254-40758baaa1ea-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9htcv\" (UID: \"8e261f9f-2027-4bb2-9254-40758baaa1ea\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.171336 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-audit-policies\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.171460 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-n76xd"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.171570 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-q7s9r"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.172751 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1a9a02c-4b40-412e-a7f1-94568385465a-srv-cert\") pod \"catalog-operator-68c6474976-lrwcc\" (UID: \"a1a9a02c-4b40-412e-a7f1-94568385465a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.173921 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c7ff0869-4b3b-447f-a012-9bc155bae99b-stats-auth\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.175085 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02d802d7-1516-4eb2-98a9-2f1878609216-serving-cert\") pod \"console-operator-58897d9998-n76xd\" (UID: \"02d802d7-1516-4eb2-98a9-2f1878609216\") " pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.175104 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a1a9a02c-4b40-412e-a7f1-94568385465a-profile-collector-cert\") pod \"catalog-operator-68c6474976-lrwcc\" (UID: \"a1a9a02c-4b40-412e-a7f1-94568385465a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.175650 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.176053 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.176161 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/081efbd4-859f-49bb-84d6-778ce124b602-metrics-tls\") pod \"dns-operator-744455d44c-zdd7g\" (UID: \"081efbd4-859f-49bb-84d6-778ce124b602\") " pod="openshift-dns-operator/dns-operator-744455d44c-zdd7g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.176523 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7ff0869-4b3b-447f-a012-9bc155bae99b-metrics-certs\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.176590 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.176597 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a1f08ef-d9a0-484e-9959-14d3ab178d28-secret-volume\") pod \"collect-profiles-29482200-ckvfc\" (UID: \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.177463 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.178549 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tdtbm"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.179142 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.179470 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0-machine-approver-tls\") pod \"machine-approver-56656f9798-hkbjv\" (UID: \"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.179872 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.180097 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9htcv"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.180389 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tdtbm" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.180613 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-44vhk"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.181541 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dwpwk"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.182559 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.184188 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.185032 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.185299 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.185643 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.185854 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7m2kh"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.186940 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.187890 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.188867 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6xth2"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.189525 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6xth2" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.189948 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gbprg"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.190877 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbprg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.191076 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6xth2"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.192366 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gbprg"] Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.206129 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.215465 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e261f9f-2027-4bb2-9254-40758baaa1ea-serving-cert\") pod \"openshift-config-operator-7777fb866f-9htcv\" (UID: \"8e261f9f-2027-4bb2-9254-40758baaa1ea\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.227808 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.261150 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-csi-data-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.261299 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-csi-data-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.261424 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-socket-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.261499 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/76bf33d8-cdc1-4d99-a84f-4e9289a963af-images\") pod \"machine-config-operator-74547568cd-vx646\" (UID: \"76bf33d8-cdc1-4d99-a84f-4e9289a963af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.261578 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3a2340-8262-4080-8c72-8ced3d6c0c5a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9j9rr\" (UID: \"0d3a2340-8262-4080-8c72-8ced3d6c0c5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.261677 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-service-ca\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.261757 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-socket-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.261867 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdzzp\" (UniqueName: \"kubernetes.io/projected/28e39614-3757-41e0-b164-eb1964ff6a8d-kube-api-access-bdzzp\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.262005 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-client\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.262094 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-ca\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.262170 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76bf33d8-cdc1-4d99-a84f-4e9289a963af-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vx646\" (UID: \"76bf33d8-cdc1-4d99-a84f-4e9289a963af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.262251 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79klp\" (UniqueName: \"kubernetes.io/projected/a507ebcc-7e0b-445b-9688-882358d365ce-kube-api-access-79klp\") pod \"control-plane-machine-set-operator-78cbb6b69f-qdlnn\" (UID: \"a507ebcc-7e0b-445b-9688-882358d365ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.262404 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2m8l\" (UniqueName: \"kubernetes.io/projected/f1297f46-d734-41e9-a7b8-5033ce03f315-kube-api-access-f2m8l\") pod \"openshift-controller-manager-operator-756b6f6bc6-2qq6g\" (UID: \"f1297f46-d734-41e9-a7b8-5033ce03f315\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.262475 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28e39614-3757-41e0-b164-eb1964ff6a8d-serving-cert\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.262568 4661 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3a2340-8262-4080-8c72-8ced3d6c0c5a-config\") pod \"kube-apiserver-operator-766d6c64bb-9j9rr\" (UID: \"0d3a2340-8262-4080-8c72-8ced3d6c0c5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.262639 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr5h5\" (UniqueName: \"kubernetes.io/projected/76bf33d8-cdc1-4d99-a84f-4e9289a963af-kube-api-access-gr5h5\") pod \"machine-config-operator-74547568cd-vx646\" (UID: \"76bf33d8-cdc1-4d99-a84f-4e9289a963af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.262734 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-registration-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.262832 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a507ebcc-7e0b-445b-9688-882358d365ce-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qdlnn\" (UID: \"a507ebcc-7e0b-445b-9688-882358d365ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.262946 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1297f46-d734-41e9-a7b8-5033ce03f315-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2qq6g\" (UID: \"f1297f46-d734-41e9-a7b8-5033ce03f315\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.263033 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1297f46-d734-41e9-a7b8-5033ce03f315-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2qq6g\" (UID: \"f1297f46-d734-41e9-a7b8-5033ce03f315\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.263121 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-config\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.263198 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-plugins-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.263264 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0d3a2340-8262-4080-8c72-8ced3d6c0c5a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9j9rr\" (UID: \"0d3a2340-8262-4080-8c72-8ced3d6c0c5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.263313 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76bf33d8-cdc1-4d99-a84f-4e9289a963af-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vx646\" (UID: \"76bf33d8-cdc1-4d99-a84f-4e9289a963af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.263440 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-mountpoint-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.263555 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76bf33d8-cdc1-4d99-a84f-4e9289a963af-proxy-tls\") pod \"machine-config-operator-74547568cd-vx646\" (UID: \"76bf33d8-cdc1-4d99-a84f-4e9289a963af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.263655 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x95s\" (UniqueName: \"kubernetes.io/projected/413ba99d-7214-4981-98fb-910c4f5731d8-kube-api-access-5x95s\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.263740 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-plugins-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.263748 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-mountpoint-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.263647 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/413ba99d-7214-4981-98fb-910c4f5731d8-registration-dir\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.266039 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.284549 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.297309 4661 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd4ae2f-ef41-402c-bc81-83385361e291-serving-cert\") pod \"service-ca-operator-777779d784-c4pxm\" (UID: \"ebd4ae2f-ef41-402c-bc81-83385361e291\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.305072 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.311478 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd4ae2f-ef41-402c-bc81-83385361e291-config\") pod \"service-ca-operator-777779d784-c4pxm\" (UID: \"ebd4ae2f-ef41-402c-bc81-83385361e291\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.325028 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.345930 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.365303 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.386033 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.406551 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.426118 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.433118 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db930c1a-9c05-419d-b168-b232e2b98e9b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6qrc\" (UID: \"db930c1a-9c05-419d-b168-b232e2b98e9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.445740 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.453190 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db930c1a-9c05-419d-b168-b232e2b98e9b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6qrc\" (UID: \"db930c1a-9c05-419d-b168-b232e2b98e9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.467192 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.474908 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8b7cc6dd-b02e-4e3f-b569-42201693f3e7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4p2m6\" (UID: \"8b7cc6dd-b02e-4e3f-b569-42201693f3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.485831 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.496466 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7cc6dd-b02e-4e3f-b569-42201693f3e7-config\") pod \"kube-controller-manager-operator-78b949d7b-4p2m6\" (UID: \"8b7cc6dd-b02e-4e3f-b569-42201693f3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.505800 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.525839 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.546582 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.555575 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-flvxz\" (UID: \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.566877 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.576768 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-oauth-serving-cert\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.594975 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.603945 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-flvxz\" (UID: \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.605278 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.625549 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.645758 4661 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.666963 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.685505 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.706460 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.725718 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.736412 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-serving-cert\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.746487 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.755305 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-oauth-config\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.765068 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.774523 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-config\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.785470 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.793495 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-service-ca\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.812577 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.820234 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-trusted-ca-bundle\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.825999 4661 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.846783 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.857954 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a9d6945-a25a-4a09-92ee-7b90664a2edd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mlncs\" (UID: \"8a9d6945-a25a-4a09-92ee-7b90664a2edd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.864963 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.869255 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9d6945-a25a-4a09-92ee-7b90664a2edd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mlncs\" (UID: \"8a9d6945-a25a-4a09-92ee-7b90664a2edd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.886951 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.906834 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.925878 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.934391 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/550a5702-08aa-4dca-a1a0-7adfebbd9312-proxy-tls\") pod \"machine-config-controller-84d6567774-xlt4n\" (UID: \"550a5702-08aa-4dca-a1a0-7adfebbd9312\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.945216 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.965770 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.977953 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:10 crc kubenswrapper[4661]: E0120 18:08:10.978156 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 18:10:12.978123445 +0000 UTC m=+269.308913147 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.978453 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.978601 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.978747 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.980170 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.982736 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.985864 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 20 18:08:10 crc kubenswrapper[4661]: I0120 18:08:10.986506 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.005714 4661 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.017206 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/52978b9f-376f-4f49-9c2d-fc3da32b178f-metrics-tls\") pod \"ingress-operator-5b745b69d9-mcf6f\" (UID: \"52978b9f-376f-4f49-9c2d-fc3da32b178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.043545 4661 request.go:700] Waited for 1.016913273s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.047006 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.052959 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.065703 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.066452 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/52978b9f-376f-4f49-9c2d-fc3da32b178f-trusted-ca\") pod \"ingress-operator-5b745b69d9-mcf6f\" (UID: \"52978b9f-376f-4f49-9c2d-fc3da32b178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.081444 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.086143 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.089055 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.106844 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.117510 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3a2340-8262-4080-8c72-8ced3d6c0c5a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9j9rr\" (UID: \"0d3a2340-8262-4080-8c72-8ced3d6c0c5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.125809 4661 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.135825 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3a2340-8262-4080-8c72-8ced3d6c0c5a-config\") pod \"kube-apiserver-operator-766d6c64bb-9j9rr\" (UID: \"0d3a2340-8262-4080-8c72-8ced3d6c0c5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.145840 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.153057 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/76bf33d8-cdc1-4d99-a84f-4e9289a963af-images\") pod \"machine-config-operator-74547568cd-vx646\" (UID: \"76bf33d8-cdc1-4d99-a84f-4e9289a963af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.165615 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.167644 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.179297 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.188324 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.199132 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.201843 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/76bf33d8-cdc1-4d99-a84f-4e9289a963af-proxy-tls\") pod \"machine-config-operator-74547568cd-vx646\" (UID: \"76bf33d8-cdc1-4d99-a84f-4e9289a963af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.207188 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.222601 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a507ebcc-7e0b-445b-9688-882358d365ce-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qdlnn\" (UID: \"a507ebcc-7e0b-445b-9688-882358d365ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.226520 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.247214 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.262985 4661 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.263086 4661 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.263126 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-client podName:28e39614-3757-41e0-b164-eb1964ff6a8d nodeName:}" failed. No retries permitted until 2026-01-20 18:08:11.763088884 +0000 UTC m=+148.093878586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-client") pod "etcd-operator-b45778765-xfrkj" (UID: "28e39614-3757-41e0-b164-eb1964ff6a8d") : failed to sync secret cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.263121 4661 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.263168 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-ca podName:28e39614-3757-41e0-b164-eb1964ff6a8d nodeName:}" failed. No retries permitted until 2026-01-20 18:08:11.763150455 +0000 UTC m=+148.093940127 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-ca") pod "etcd-operator-b45778765-xfrkj" (UID: "28e39614-3757-41e0-b164-eb1964ff6a8d") : failed to sync configmap cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.263237 4661 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.263274 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28e39614-3757-41e0-b164-eb1964ff6a8d-serving-cert podName:28e39614-3757-41e0-b164-eb1964ff6a8d nodeName:}" failed. No retries permitted until 2026-01-20 18:08:11.763235178 +0000 UTC m=+148.094024880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/28e39614-3757-41e0-b164-eb1964ff6a8d-serving-cert") pod "etcd-operator-b45778765-xfrkj" (UID: "28e39614-3757-41e0-b164-eb1964ff6a8d") : failed to sync secret cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.263309 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-service-ca podName:28e39614-3757-41e0-b164-eb1964ff6a8d nodeName:}" failed. No retries permitted until 2026-01-20 18:08:11.763293229 +0000 UTC m=+148.094082931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-service-ca") pod "etcd-operator-b45778765-xfrkj" (UID: "28e39614-3757-41e0-b164-eb1964ff6a8d") : failed to sync configmap cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.263784 4661 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.263832 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1297f46-d734-41e9-a7b8-5033ce03f315-config podName:f1297f46-d734-41e9-a7b8-5033ce03f315 nodeName:}" failed. No retries permitted until 2026-01-20 18:08:11.763820293 +0000 UTC m=+148.094609965 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1297f46-d734-41e9-a7b8-5033ce03f315-config") pod "openshift-controller-manager-operator-756b6f6bc6-2qq6g" (UID: "f1297f46-d734-41e9-a7b8-5033ce03f315") : failed to sync configmap cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.263830 4661 secret.go:188] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.263875 4661 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.263906 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-config podName:28e39614-3757-41e0-b164-eb1964ff6a8d nodeName:}" failed. No retries permitted until 2026-01-20 18:08:11.763894865 +0000 UTC m=+148.094684537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-config") pod "etcd-operator-b45778765-xfrkj" (UID: "28e39614-3757-41e0-b164-eb1964ff6a8d") : failed to sync configmap cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: E0120 18:08:11.263946 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1297f46-d734-41e9-a7b8-5033ce03f315-serving-cert podName:f1297f46-d734-41e9-a7b8-5033ce03f315 nodeName:}" failed. No retries permitted until 2026-01-20 18:08:11.763916436 +0000 UTC m=+148.094706188 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1297f46-d734-41e9-a7b8-5033ce03f315-serving-cert") pod "openshift-controller-manager-operator-756b6f6bc6-2qq6g" (UID: "f1297f46-d734-41e9-a7b8-5033ce03f315") : failed to sync secret cache: timed out waiting for the condition Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.266392 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.286348 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.305792 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.330119 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.353042 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.368254 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.393392 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.404558 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.425944 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.446203 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.468154 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.486024 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.505507 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.525305 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.548501 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.565405 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.587572 4661 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.606908 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.627254 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.647114 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 20 18:08:11 crc kubenswrapper[4661]: W0120 18:08:11.652642 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-b0ecefaa8676602205dadb88f46adf3a632ad6d6cb262458e2c41a6edb28a9a5 WatchSource:0}: Error finding container b0ecefaa8676602205dadb88f46adf3a632ad6d6cb262458e2c41a6edb28a9a5: Status 404 returned error can't find the container with id b0ecefaa8676602205dadb88f46adf3a632ad6d6cb262458e2c41a6edb28a9a5 Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.681814 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8vww\" (UniqueName: \"kubernetes.io/projected/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-kube-api-access-l8vww\") pod \"controller-manager-879f6c89f-wd4nq\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.686352 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.706972 4661 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.726564 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.761650 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lknt\" (UniqueName: \"kubernetes.io/projected/8c6c735b-2d38-430c-a5b7-10b9b06ef623-kube-api-access-4lknt\") pod \"apiserver-7bbb656c7d-kh25t\" (UID: \"8c6c735b-2d38-430c-a5b7-10b9b06ef623\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.779396 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9dhd\" (UniqueName: \"kubernetes.io/projected/17cc4c8d-5d73-4307-83ea-e826befa5b06-kube-api-access-p9dhd\") pod \"package-server-manager-789f6589d5-tmtgr\" (UID: \"17cc4c8d-5d73-4307-83ea-e826befa5b06\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.793625 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1297f46-d734-41e9-a7b8-5033ce03f315-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2qq6g\" (UID: \"f1297f46-d734-41e9-a7b8-5033ce03f315\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.793743 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1297f46-d734-41e9-a7b8-5033ce03f315-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2qq6g\" (UID: \"f1297f46-d734-41e9-a7b8-5033ce03f315\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.793821 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-config\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.794122 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-service-ca\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.794198 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-ca\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.794239 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-client\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.794322 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28e39614-3757-41e0-b164-eb1964ff6a8d-serving-cert\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.794596 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1297f46-d734-41e9-a7b8-5033ce03f315-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2qq6g\" (UID: \"f1297f46-d734-41e9-a7b8-5033ce03f315\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.795258 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-config\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.795488 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-ca\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 
20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.795867 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-service-ca\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.797496 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28e39614-3757-41e0-b164-eb1964ff6a8d-etcd-client\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.798785 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8xsw\" (UniqueName: \"kubernetes.io/projected/457c15d5-4066-4d88-bbb4-a9fe13de20cd-kube-api-access-g8xsw\") pod \"route-controller-manager-6576b87f9c-j48cg\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.799012 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1297f46-d734-41e9-a7b8-5033ce03f315-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2qq6g\" (UID: \"f1297f46-d734-41e9-a7b8-5033ce03f315\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.799073 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28e39614-3757-41e0-b164-eb1964ff6a8d-serving-cert\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.820629 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf8xf\" (UniqueName: \"kubernetes.io/projected/53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65-kube-api-access-qf8xf\") pod \"apiserver-76f77b778f-q7s9r\" (UID: \"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65\") " pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.840213 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlmjt\" (UniqueName: \"kubernetes.io/projected/1fe4d701-349f-4edf-a59f-092ccfcdd40e-kube-api-access-mlmjt\") pod \"openshift-apiserver-operator-796bbdcf4f-ngxm7\" (UID: \"1fe4d701-349f-4edf-a59f-092ccfcdd40e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.847071 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.872325 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99vq6\" (UniqueName: \"kubernetes.io/projected/18723097-a708-4951-89bc-48ffc2128786-kube-api-access-99vq6\") pod \"authentication-operator-69f744f599-7bk7r\" (UID: \"18723097-a708-4951-89bc-48ffc2128786\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.889485 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2rcj\" (UniqueName: \"kubernetes.io/projected/ad28ae9e-274e-45fc-8202-683aadfa3494-kube-api-access-j2rcj\") pod \"service-ca-9c57cc56f-4flqc\" (UID: \"ad28ae9e-274e-45fc-8202-683aadfa3494\") " pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.900552 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb5l7\" (UniqueName: \"kubernetes.io/projected/e6808303-41c9-4185-beed-5e7460b07075-kube-api-access-rb5l7\") pod \"cluster-samples-operator-665b6dd947-446tk\" (UID: \"e6808303-41c9-4185-beed-5e7460b07075\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.907266 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.908188 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.917975 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.934600 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crnqh\" (UniqueName: \"kubernetes.io/projected/00496c34-a198-4516-bff8-0b553db85849-kube-api-access-crnqh\") pod \"migrator-59844c95c7-qwp42\" (UID: \"00496c34-a198-4516-bff8-0b553db85849\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qwp42" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.942092 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97twf\" (UniqueName: \"kubernetes.io/projected/302e8226-565c-44a4-bb0e-dee670200ae3-kube-api-access-97twf\") pod \"machine-api-operator-5694c8668f-7hbkg\" (UID: \"302e8226-565c-44a4-bb0e-dee670200ae3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.982919 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:11 crc kubenswrapper[4661]: I0120 18:08:11.991517 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/52978b9f-376f-4f49-9c2d-fc3da32b178f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mcf6f\" (UID: \"52978b9f-376f-4f49-9c2d-fc3da32b178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.005356 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.017205 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqrqq\" (UniqueName: \"kubernetes.io/projected/afe96487-2a45-4ad8-8f17-7f33186f55f4-kube-api-access-tqrqq\") pod \"downloads-7954f5f757-bvpn8\" (UID: \"afe96487-2a45-4ad8-8f17-7f33186f55f4\") " pod="openshift-console/downloads-7954f5f757-bvpn8" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.020133 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.024440 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxvmg\" (UniqueName: \"kubernetes.io/projected/4c500541-c3f2-4f6d-8bb7-1227aa74989a-kube-api-access-hxvmg\") pod \"console-f9d7485db-phg9x\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.055029 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnzzj\" (UniqueName: \"kubernetes.io/projected/8a9d6945-a25a-4a09-92ee-7b90664a2edd-kube-api-access-nnzzj\") pod \"kube-storage-version-migrator-operator-b67b599dd-mlncs\" (UID: \"8a9d6945-a25a-4a09-92ee-7b90664a2edd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.065797 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.066271 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bvpn8" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.066278 4661 request.go:700] Waited for 1.904888372s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.067721 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs5q2\" (UniqueName: \"kubernetes.io/projected/e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0-kube-api-access-qs5q2\") pod \"machine-approver-56656f9798-hkbjv\" (UID: \"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.072177 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.096308 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.097940 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.099878 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.100312 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw4mr\" (UniqueName: \"kubernetes.io/projected/ebd4ae2f-ef41-402c-bc81-83385361e291-kube-api-access-xw4mr\") pod \"service-ca-operator-777779d784-c4pxm\" (UID: \"ebd4ae2f-ef41-402c-bc81-83385361e291\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.105863 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-599hc\" (UniqueName: \"kubernetes.io/projected/081efbd4-859f-49bb-84d6-778ce124b602-kube-api-access-599hc\") pod \"dns-operator-744455d44c-zdd7g\" (UID: \"081efbd4-859f-49bb-84d6-778ce124b602\") " pod="openshift-dns-operator/dns-operator-744455d44c-zdd7g" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.115837 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t"] Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.122599 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnr4m\" (UniqueName: \"kubernetes.io/projected/1f0c818b-31de-43ee-a20a-1fc174261b42-kube-api-access-hnr4m\") pod \"oauth-openshift-558db77b4-bh9mt\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.134247 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qwp42" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.154443 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9n45\" (UniqueName: \"kubernetes.io/projected/a1a9a02c-4b40-412e-a7f1-94568385465a-kube-api-access-l9n45\") pod \"catalog-operator-68c6474976-lrwcc\" (UID: \"a1a9a02c-4b40-412e-a7f1-94568385465a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.171337 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhsns\" (UniqueName: \"kubernetes.io/projected/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-kube-api-access-fhsns\") pod \"marketplace-operator-79b997595-flvxz\" (UID: \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.189368 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9qpp\" (UniqueName: \"kubernetes.io/projected/c7ff0869-4b3b-447f-a012-9bc155bae99b-kube-api-access-f9qpp\") pod \"router-default-5444994796-tptl9\" (UID: \"c7ff0869-4b3b-447f-a012-9bc155bae99b\") " pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.205487 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpnqn\" (UniqueName: \"kubernetes.io/projected/02d802d7-1516-4eb2-98a9-2f1878609216-kube-api-access-cpnqn\") pod \"console-operator-58897d9998-n76xd\" (UID: \"02d802d7-1516-4eb2-98a9-2f1878609216\") " pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.225057 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"76b080f7a22e18d32100562c9d01af2530ac8ab10957d2789e905b33bda5e86a"} Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.225117 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b0ecefaa8676602205dadb88f46adf3a632ad6d6cb262458e2c41a6edb28a9a5"} Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.226442 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f66a899c65f95a29824ca2a430314b07b85c6c52132b3139c855dcf7267666b2"} Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.226463 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8b35ac2d401216959fe7a834d42904e50abf51a87a8f461d62c8d99566718cfd"} Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.226722 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.227535 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrdx6\" (UniqueName: 
\"kubernetes.io/projected/52978b9f-376f-4f49-9c2d-fc3da32b178f-kube-api-access-qrdx6\") pod \"ingress-operator-5b745b69d9-mcf6f\" (UID: \"52978b9f-376f-4f49-9c2d-fc3da32b178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.227616 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f02eb999b23d221a8f0ccbcc1a463d92a982130b0b5bc978c3d88a604cf2b829"} Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.227691 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9d1641450d81900e9e2bb62a75e5a25f6821247aa7c17ef594405e5b26fd7c1b"} Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.229998 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.235643 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.253477 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db930c1a-9c05-419d-b168-b232e2b98e9b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6qrc\" (UID: \"db930c1a-9c05-419d-b168-b232e2b98e9b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.258692 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.282216 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d697c\" (UniqueName: \"kubernetes.io/projected/7a1f08ef-d9a0-484e-9959-14d3ab178d28-kube-api-access-d697c\") pod \"collect-profiles-29482200-ckvfc\" (UID: \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.289638 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b7cc6dd-b02e-4e3f-b569-42201693f3e7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4p2m6\" (UID: \"8b7cc6dd-b02e-4e3f-b569-42201693f3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.303095 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.306505 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zdd7g" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.311362 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5nmr\" (UniqueName: \"kubernetes.io/projected/8e261f9f-2027-4bb2-9254-40758baaa1ea-kube-api-access-q5nmr\") pod \"openshift-config-operator-7777fb866f-9htcv\" (UID: \"8e261f9f-2027-4bb2-9254-40758baaa1ea\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.324210 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.331380 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.339632 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.346726 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.351355 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.351762 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jgm2\" (UniqueName: \"kubernetes.io/projected/550a5702-08aa-4dca-a1a0-7adfebbd9312-kube-api-access-9jgm2\") pod \"machine-config-controller-84d6567774-xlt4n\" (UID: \"550a5702-08aa-4dca-a1a0-7adfebbd9312\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.356995 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.368313 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrn8w\" (UniqueName: \"kubernetes.io/projected/e4cd0e68-3282-4713-8386-8c86f56f1f70-kube-api-access-jrn8w\") pod \"multus-admission-controller-857f4d67dd-44vhk\" (UID: \"e4cd0e68-3282-4713-8386-8c86f56f1f70\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-44vhk" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.370525 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.372684 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.388382 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.399515 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.406788 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.410877 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.431050 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.448042 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.467132 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.488102 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.511718 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.526912 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.543985 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.545667 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7bk7r"] Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.561015 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4flqc"] Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.588629 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3a2340-8262-4080-8c72-8ced3d6c0c5a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9j9rr\" (UID: \"0d3a2340-8262-4080-8c72-8ced3d6c0c5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.605809 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdzzp\" (UniqueName: \"kubernetes.io/projected/28e39614-3757-41e0-b164-eb1964ff6a8d-kube-api-access-bdzzp\") pod \"etcd-operator-b45778765-xfrkj\" (UID: \"28e39614-3757-41e0-b164-eb1964ff6a8d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.614044 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-44vhk" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.625920 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79klp\" (UniqueName: \"kubernetes.io/projected/a507ebcc-7e0b-445b-9688-882358d365ce-kube-api-access-79klp\") pod \"control-plane-machine-set-operator-78cbb6b69f-qdlnn\" (UID: \"a507ebcc-7e0b-445b-9688-882358d365ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.637014 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr"] Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.647359 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2m8l\" (UniqueName: \"kubernetes.io/projected/f1297f46-d734-41e9-a7b8-5033ce03f315-kube-api-access-f2m8l\") pod \"openshift-controller-manager-operator-756b6f6bc6-2qq6g\" (UID: \"f1297f46-d734-41e9-a7b8-5033ce03f315\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.666808 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr5h5\" (UniqueName: \"kubernetes.io/projected/76bf33d8-cdc1-4d99-a84f-4e9289a963af-kube-api-access-gr5h5\") pod \"machine-config-operator-74547568cd-vx646\" (UID: \"76bf33d8-cdc1-4d99-a84f-4e9289a963af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:12 crc kubenswrapper[4661]: W0120 18:08:12.681289 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad28ae9e_274e_45fc_8202_683aadfa3494.slice/crio-3965f3b87a1bd6aa8b7a209dabc6ca0db7b93b0d09ee1a99a31490d4a004baac WatchSource:0}: Error finding container 3965f3b87a1bd6aa8b7a209dabc6ca0db7b93b0d09ee1a99a31490d4a004baac: Status 404 returned error can't find the container with id 3965f3b87a1bd6aa8b7a209dabc6ca0db7b93b0d09ee1a99a31490d4a004baac Jan 20 18:08:12 crc kubenswrapper[4661]: I0120 18:08:12.690507 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x95s\" (UniqueName: \"kubernetes.io/projected/413ba99d-7214-4981-98fb-910c4f5731d8-kube-api-access-5x95s\") pod \"csi-hostpathplugin-dwpwk\" (UID: \"413ba99d-7214-4981-98fb-910c4f5731d8\") " pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:12 crc kubenswrapper[4661]: W0120 18:08:12.698575 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17cc4c8d_5d73_4307_83ea_e826befa5b06.slice/crio-7d1ce06e8fc740d0508c2f1c1c49e2bb90473a1e49b7722d7010980dbb9ef0b7 WatchSource:0}: Error finding container 7d1ce06e8fc740d0508c2f1c1c49e2bb90473a1e49b7722d7010980dbb9ef0b7: Status 404 returned error can't find the container with id 7d1ce06e8fc740d0508c2f1c1c49e2bb90473a1e49b7722d7010980dbb9ef0b7 Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.096798 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.097269 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.097548 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.098382 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.098533 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.101062 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.107323 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-registry-tls\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.108723 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a3225a4-585b-4ad0-9951-c5feae37b6cc-registry-certificates\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.108839 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-bound-sa-token\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.108987 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfg74\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-kube-api-access-vfg74\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.140078 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.140196 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a3225a4-585b-4ad0-9951-c5feae37b6cc-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.140362 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a3225a4-585b-4ad0-9951-c5feae37b6cc-trusted-ca\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.140744 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a3225a4-585b-4ad0-9951-c5feae37b6cc-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: E0120 18:08:13.144514 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:13.644492641 +0000 UTC m=+149.975282303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.256536 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:13 crc kubenswrapper[4661]: E0120 18:08:13.256923 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:13.756880888 +0000 UTC m=+150.087670550 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.257916 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a3225a4-585b-4ad0-9951-c5feae37b6cc-trusted-ca\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.258565 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r68vb\" (UniqueName: \"kubernetes.io/projected/e066f8cc-45be-4120-b044-f1852309d339-kube-api-access-r68vb\") pod \"machine-config-server-tdtbm\" (UID: \"e066f8cc-45be-4120-b044-f1852309d339\") " pod="openshift-machine-config-operator/machine-config-server-tdtbm" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.258666 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a8e7e81-15e8-4127-9d43-732131427aa2-webhook-cert\") pod \"packageserver-d55dfcdfc-454g9\" (UID: \"3a8e7e81-15e8-4127-9d43-732131427aa2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.258823 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3568f10-700a-4f42-8e58-f749a50cf0cb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mfmq7\" (UID: \"b3568f10-700a-4f42-8e58-f749a50cf0cb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.258916 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79c9338e-65b7-4805-810b-beb0b3c032a6-cert\") pod \"ingress-canary-6xth2\" (UID: \"79c9338e-65b7-4805-810b-beb0b3c032a6\") " pod="openshift-ingress-canary/ingress-canary-6xth2" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.259061 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a3225a4-585b-4ad0-9951-c5feae37b6cc-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.259165 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3568f10-700a-4f42-8e58-f749a50cf0cb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mfmq7\" (UID: \"b3568f10-700a-4f42-8e58-f749a50cf0cb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.259255 4661 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a8e7e81-15e8-4127-9d43-732131427aa2-apiservice-cert\") pod \"packageserver-d55dfcdfc-454g9\" (UID: \"3a8e7e81-15e8-4127-9d43-732131427aa2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.259323 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-registry-tls\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.259440 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/673a7b2a-8e9f-4d92-800e-9ae986189dec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-76h24\" (UID: \"673a7b2a-8e9f-4d92-800e-9ae986189dec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.259547 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e066f8cc-45be-4120-b044-f1852309d339-node-bootstrap-token\") pod \"machine-config-server-tdtbm\" (UID: \"e066f8cc-45be-4120-b044-f1852309d339\") " pod="openshift-machine-config-operator/machine-config-server-tdtbm" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.259630 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a3225a4-585b-4ad0-9951-c5feae37b6cc-registry-certificates\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.259715 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a8e7e81-15e8-4127-9d43-732131427aa2-tmpfs\") pod \"packageserver-d55dfcdfc-454g9\" (UID: \"3a8e7e81-15e8-4127-9d43-732131427aa2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.259901 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/673a7b2a-8e9f-4d92-800e-9ae986189dec-srv-cert\") pod \"olm-operator-6b444d44fb-76h24\" (UID: \"673a7b2a-8e9f-4d92-800e-9ae986189dec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.260006 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3568f10-700a-4f42-8e58-f749a50cf0cb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mfmq7\" (UID: \"b3568f10-700a-4f42-8e58-f749a50cf0cb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.260093 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-bound-sa-token\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.260163 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8bcz\" (UniqueName: \"kubernetes.io/projected/b3568f10-700a-4f42-8e58-f749a50cf0cb-kube-api-access-k8bcz\") pod \"cluster-image-registry-operator-dc59b4c8b-mfmq7\" (UID: \"b3568f10-700a-4f42-8e58-f749a50cf0cb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.260233 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nm8f\" (UniqueName: \"kubernetes.io/projected/8924a8e2-8af0-49f0-9e0b-befc11e5755a-kube-api-access-5nm8f\") pod \"dns-default-gbprg\" (UID: \"8924a8e2-8af0-49f0-9e0b-befc11e5755a\") " pod="openshift-dns/dns-default-gbprg" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.260350 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-946dp\" (UniqueName: \"kubernetes.io/projected/79c9338e-65b7-4805-810b-beb0b3c032a6-kube-api-access-946dp\") pod \"ingress-canary-6xth2\" (UID: \"79c9338e-65b7-4805-810b-beb0b3c032a6\") " pod="openshift-ingress-canary/ingress-canary-6xth2" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.260814 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfg74\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-kube-api-access-vfg74\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.260914 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8924a8e2-8af0-49f0-9e0b-befc11e5755a-config-volume\") pod \"dns-default-gbprg\" (UID: \"8924a8e2-8af0-49f0-9e0b-befc11e5755a\") " pod="openshift-dns/dns-default-gbprg" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.260999 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqrcx\" (UniqueName: \"kubernetes.io/projected/673a7b2a-8e9f-4d92-800e-9ae986189dec-kube-api-access-pqrcx\") pod \"olm-operator-6b444d44fb-76h24\" (UID: \"673a7b2a-8e9f-4d92-800e-9ae986189dec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.261162 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg5bv\" (UniqueName: \"kubernetes.io/projected/3a8e7e81-15e8-4127-9d43-732131427aa2-kube-api-access-wg5bv\") pod \"packageserver-d55dfcdfc-454g9\" (UID: \"3a8e7e81-15e8-4127-9d43-732131427aa2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.261245 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.261275 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e066f8cc-45be-4120-b044-f1852309d339-certs\") pod \"machine-config-server-tdtbm\" (UID: \"e066f8cc-45be-4120-b044-f1852309d339\") " pod="openshift-machine-config-operator/machine-config-server-tdtbm" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.261372 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a3225a4-585b-4ad0-9951-c5feae37b6cc-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.261422 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8924a8e2-8af0-49f0-9e0b-befc11e5755a-metrics-tls\") pod \"dns-default-gbprg\" (UID: \"8924a8e2-8af0-49f0-9e0b-befc11e5755a\") " pod="openshift-dns/dns-default-gbprg" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.263227 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a3225a4-585b-4ad0-9951-c5feae37b6cc-trusted-ca\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: E0120 18:08:13.264444 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:13.764429942 +0000 UTC m=+150.095219604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.266279 4661 generic.go:334] "Generic (PLEG): container finished" podID="8c6c735b-2d38-430c-a5b7-10b9b06ef623" containerID="fb749ec28824345f1dfa248f7b2a215d6d0517b61c2b16822fc9f6d72cd1783d" exitCode=0 Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.266442 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" event={"ID":"8c6c735b-2d38-430c-a5b7-10b9b06ef623","Type":"ContainerDied","Data":"fb749ec28824345f1dfa248f7b2a215d6d0517b61c2b16822fc9f6d72cd1783d"} Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.266480 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" event={"ID":"8c6c735b-2d38-430c-a5b7-10b9b06ef623","Type":"ContainerStarted","Data":"11056272334aed81324b950c94f3bbc3b84d532d3affd096f8520391bbc5048f"} Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.268480 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a3225a4-585b-4ad0-9951-c5feae37b6cc-registry-certificates\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.273727 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a3225a4-585b-4ad0-9951-c5feae37b6cc-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.281492 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tptl9" event={"ID":"c7ff0869-4b3b-447f-a012-9bc155bae99b","Type":"ContainerStarted","Data":"37db350607a35c126b1f487c6233a785393e87a6f2f64b83125effcc77a5036d"} Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.282973 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" event={"ID":"18723097-a708-4951-89bc-48ffc2128786","Type":"ContainerStarted","Data":"b6e3e8067a071515ca8d83220c83f939dc7725bda2c42501af4ece5970dd31d7"} Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.305277 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" event={"ID":"ad28ae9e-274e-45fc-8202-683aadfa3494","Type":"ContainerStarted","Data":"3965f3b87a1bd6aa8b7a209dabc6ca0db7b93b0d09ee1a99a31490d4a004baac"} Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.310464 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a3225a4-585b-4ad0-9951-c5feae37b6cc-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.310564 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-bound-sa-token\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.311683 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-registry-tls\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.320613 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfg74\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-kube-api-access-vfg74\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.364136 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7"] Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.364379 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:13 crc kubenswrapper[4661]: E0120 18:08:13.364473 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:13.864450994 +0000 UTC m=+150.195240656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.364808 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r68vb\" (UniqueName: \"kubernetes.io/projected/e066f8cc-45be-4120-b044-f1852309d339-kube-api-access-r68vb\") pod \"machine-config-server-tdtbm\" (UID: \"e066f8cc-45be-4120-b044-f1852309d339\") " pod="openshift-machine-config-operator/machine-config-server-tdtbm" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.364844 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a8e7e81-15e8-4127-9d43-732131427aa2-webhook-cert\") pod \"packageserver-d55dfcdfc-454g9\" (UID: \"3a8e7e81-15e8-4127-9d43-732131427aa2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.364882 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3568f10-700a-4f42-8e58-f749a50cf0cb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mfmq7\" (UID: \"b3568f10-700a-4f42-8e58-f749a50cf0cb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.364906 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79c9338e-65b7-4805-810b-beb0b3c032a6-cert\") pod \"ingress-canary-6xth2\" (UID: \"79c9338e-65b7-4805-810b-beb0b3c032a6\") " pod="openshift-ingress-canary/ingress-canary-6xth2" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.364966 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3568f10-700a-4f42-8e58-f749a50cf0cb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mfmq7\" (UID: \"b3568f10-700a-4f42-8e58-f749a50cf0cb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.364988 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a8e7e81-15e8-4127-9d43-732131427aa2-apiservice-cert\") pod \"packageserver-d55dfcdfc-454g9\" (UID: \"3a8e7e81-15e8-4127-9d43-732131427aa2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365011 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e066f8cc-45be-4120-b044-f1852309d339-node-bootstrap-token\") pod \"machine-config-server-tdtbm\" (UID: \"e066f8cc-45be-4120-b044-f1852309d339\") " pod="openshift-machine-config-operator/machine-config-server-tdtbm" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365031 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/673a7b2a-8e9f-4d92-800e-9ae986189dec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-76h24\" (UID: \"673a7b2a-8e9f-4d92-800e-9ae986189dec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365055 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a8e7e81-15e8-4127-9d43-732131427aa2-tmpfs\") pod \"packageserver-d55dfcdfc-454g9\" (UID: \"3a8e7e81-15e8-4127-9d43-732131427aa2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365074 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/673a7b2a-8e9f-4d92-800e-9ae986189dec-srv-cert\") pod \"olm-operator-6b444d44fb-76h24\" (UID: \"673a7b2a-8e9f-4d92-800e-9ae986189dec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365098 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3568f10-700a-4f42-8e58-f749a50cf0cb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mfmq7\" (UID: \"b3568f10-700a-4f42-8e58-f749a50cf0cb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365117 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8bcz\" (UniqueName: \"kubernetes.io/projected/b3568f10-700a-4f42-8e58-f749a50cf0cb-kube-api-access-k8bcz\") pod \"cluster-image-registry-operator-dc59b4c8b-mfmq7\" (UID: \"b3568f10-700a-4f42-8e58-f749a50cf0cb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365134 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nm8f\" (UniqueName: \"kubernetes.io/projected/8924a8e2-8af0-49f0-9e0b-befc11e5755a-kube-api-access-5nm8f\") pod \"dns-default-gbprg\" (UID: \"8924a8e2-8af0-49f0-9e0b-befc11e5755a\") " pod="openshift-dns/dns-default-gbprg" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365173 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-946dp\" (UniqueName: \"kubernetes.io/projected/79c9338e-65b7-4805-810b-beb0b3c032a6-kube-api-access-946dp\") pod \"ingress-canary-6xth2\" (UID: \"79c9338e-65b7-4805-810b-beb0b3c032a6\") " pod="openshift-ingress-canary/ingress-canary-6xth2" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365205 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8924a8e2-8af0-49f0-9e0b-befc11e5755a-config-volume\") pod \"dns-default-gbprg\" (UID: \"8924a8e2-8af0-49f0-9e0b-befc11e5755a\") " pod="openshift-dns/dns-default-gbprg" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365223 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqrcx\" (UniqueName: \"kubernetes.io/projected/673a7b2a-8e9f-4d92-800e-9ae986189dec-kube-api-access-pqrcx\") pod \"olm-operator-6b444d44fb-76h24\" (UID: \"673a7b2a-8e9f-4d92-800e-9ae986189dec\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365244 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg5bv\" (UniqueName: \"kubernetes.io/projected/3a8e7e81-15e8-4127-9d43-732131427aa2-kube-api-access-wg5bv\") pod \"packageserver-d55dfcdfc-454g9\" (UID: \"3a8e7e81-15e8-4127-9d43-732131427aa2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365270 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365286 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e066f8cc-45be-4120-b044-f1852309d339-certs\") pod \"machine-config-server-tdtbm\" (UID: \"e066f8cc-45be-4120-b044-f1852309d339\") " pod="openshift-machine-config-operator/machine-config-server-tdtbm" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.365313 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8924a8e2-8af0-49f0-9e0b-befc11e5755a-metrics-tls\") pod \"dns-default-gbprg\" (UID: \"8924a8e2-8af0-49f0-9e0b-befc11e5755a\") " pod="openshift-dns/dns-default-gbprg" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.376411 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a8e7e81-15e8-4127-9d43-732131427aa2-webhook-cert\") pod \"packageserver-d55dfcdfc-454g9\" (UID: \"3a8e7e81-15e8-4127-9d43-732131427aa2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.382551 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8924a8e2-8af0-49f0-9e0b-befc11e5755a-config-volume\") pod \"dns-default-gbprg\" (UID: \"8924a8e2-8af0-49f0-9e0b-befc11e5755a\") " pod="openshift-dns/dns-default-gbprg" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.383344 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" event={"ID":"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0","Type":"ContainerStarted","Data":"7b46a735d6f856a986d57fd4dde5942c1511f7c74fba125640a714d74c898a46"} Jan 20 18:08:13 crc kubenswrapper[4661]: E0120 18:08:13.384037 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:13.884021912 +0000 UTC m=+150.214811574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.384595 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a8e7e81-15e8-4127-9d43-732131427aa2-tmpfs\") pod \"packageserver-d55dfcdfc-454g9\" (UID: \"3a8e7e81-15e8-4127-9d43-732131427aa2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.386733 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e066f8cc-45be-4120-b044-f1852309d339-node-bootstrap-token\") pod \"machine-config-server-tdtbm\" (UID: \"e066f8cc-45be-4120-b044-f1852309d339\") " pod="openshift-machine-config-operator/machine-config-server-tdtbm" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.387356 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8924a8e2-8af0-49f0-9e0b-befc11e5755a-metrics-tls\") pod \"dns-default-gbprg\" (UID: \"8924a8e2-8af0-49f0-9e0b-befc11e5755a\") " pod="openshift-dns/dns-default-gbprg" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.390172 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e066f8cc-45be-4120-b044-f1852309d339-certs\") pod \"machine-config-server-tdtbm\" (UID: \"e066f8cc-45be-4120-b044-f1852309d339\") " pod="openshift-machine-config-operator/machine-config-server-tdtbm" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.390738 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/673a7b2a-8e9f-4d92-800e-9ae986189dec-srv-cert\") pod \"olm-operator-6b444d44fb-76h24\" (UID: \"673a7b2a-8e9f-4d92-800e-9ae986189dec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.391738 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79c9338e-65b7-4805-810b-beb0b3c032a6-cert\") pod \"ingress-canary-6xth2\" (UID: \"79c9338e-65b7-4805-810b-beb0b3c032a6\") " pod="openshift-ingress-canary/ingress-canary-6xth2" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.420961 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3568f10-700a-4f42-8e58-f749a50cf0cb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mfmq7\" (UID: \"b3568f10-700a-4f42-8e58-f749a50cf0cb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.448896 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" event={"ID":"17cc4c8d-5d73-4307-83ea-e826befa5b06","Type":"ContainerStarted","Data":"7d1ce06e8fc740d0508c2f1c1c49e2bb90473a1e49b7722d7010980dbb9ef0b7"} Jan 20 18:08:13 crc 
kubenswrapper[4661]: I0120 18:08:13.462897 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg"] Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.501010 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqrcx\" (UniqueName: \"kubernetes.io/projected/673a7b2a-8e9f-4d92-800e-9ae986189dec-kube-api-access-pqrcx\") pod \"olm-operator-6b444d44fb-76h24\" (UID: \"673a7b2a-8e9f-4d92-800e-9ae986189dec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.501457 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a8e7e81-15e8-4127-9d43-732131427aa2-apiservice-cert\") pod \"packageserver-d55dfcdfc-454g9\" (UID: \"3a8e7e81-15e8-4127-9d43-732131427aa2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.502770 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/673a7b2a-8e9f-4d92-800e-9ae986189dec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-76h24\" (UID: \"673a7b2a-8e9f-4d92-800e-9ae986189dec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.503508 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg5bv\" (UniqueName: \"kubernetes.io/projected/3a8e7e81-15e8-4127-9d43-732131427aa2-kube-api-access-wg5bv\") pod \"packageserver-d55dfcdfc-454g9\" (UID: \"3a8e7e81-15e8-4127-9d43-732131427aa2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.503567 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r68vb\" (UniqueName: \"kubernetes.io/projected/e066f8cc-45be-4120-b044-f1852309d339-kube-api-access-r68vb\") pod \"machine-config-server-tdtbm\" (UID: \"e066f8cc-45be-4120-b044-f1852309d339\") " pod="openshift-machine-config-operator/machine-config-server-tdtbm" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.504091 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-946dp\" (UniqueName: \"kubernetes.io/projected/79c9338e-65b7-4805-810b-beb0b3c032a6-kube-api-access-946dp\") pod \"ingress-canary-6xth2\" (UID: \"79c9338e-65b7-4805-810b-beb0b3c032a6\") " pod="openshift-ingress-canary/ingress-canary-6xth2" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.504487 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:13 crc kubenswrapper[4661]: E0120 18:08:13.504909 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:14.004893908 +0000 UTC m=+150.335683570 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.508801 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3568f10-700a-4f42-8e58-f749a50cf0cb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mfmq7\" (UID: \"b3568f10-700a-4f42-8e58-f749a50cf0cb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.512291 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3568f10-700a-4f42-8e58-f749a50cf0cb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mfmq7\" (UID: \"b3568f10-700a-4f42-8e58-f749a50cf0cb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.513625 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8bcz\" (UniqueName: \"kubernetes.io/projected/b3568f10-700a-4f42-8e58-f749a50cf0cb-kube-api-access-k8bcz\") pod \"cluster-image-registry-operator-dc59b4c8b-mfmq7\" (UID: \"b3568f10-700a-4f42-8e58-f749a50cf0cb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.518559 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk"] Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.520575 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wd4nq"] Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.534876 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nm8f\" (UniqueName: \"kubernetes.io/projected/8924a8e2-8af0-49f0-9e0b-befc11e5755a-kube-api-access-5nm8f\") pod \"dns-default-gbprg\" (UID: \"8924a8e2-8af0-49f0-9e0b-befc11e5755a\") " pod="openshift-dns/dns-default-gbprg" Jan 20 18:08:13 crc kubenswrapper[4661]: W0120 18:08:13.577722 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe4d701_349f_4edf_a59f_092ccfcdd40e.slice/crio-c5c778c82a5c34e972fed1d11577e086cdd464deeb22c7d36f9ab0f61e8822f0 WatchSource:0}: Error finding container c5c778c82a5c34e972fed1d11577e086cdd464deeb22c7d36f9ab0f61e8822f0: Status 404 returned error can't find the container with id c5c778c82a5c34e972fed1d11577e086cdd464deeb22c7d36f9ab0f61e8822f0 Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.602041 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7hbkg"] Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.613863 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: E0120 18:08:13.614218 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:14.114203381 +0000 UTC m=+150.444993043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.641022 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-phg9x"] Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.641287 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.652294 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs"] Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.657894 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.686352 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.707209 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bvpn8"] Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.716364 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:13 crc kubenswrapper[4661]: E0120 18:08:13.716843 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:14.216821273 +0000 UTC m=+150.547610935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.731792 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tdtbm" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.750168 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6xth2" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.765775 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbprg" Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.823303 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:13 crc kubenswrapper[4661]: E0120 18:08:13.823824 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:14.323804024 +0000 UTC m=+150.654593686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.932900 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n"] Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.935144 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bh9mt"] Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.942087 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:13 crc kubenswrapper[4661]: E0120 18:08:13.943063 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:14.443042175 +0000 UTC m=+150.773831837 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:13 crc kubenswrapper[4661]: I0120 18:08:13.975065 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zdd7g"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.048940 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:14 crc kubenswrapper[4661]: E0120 18:08:14.050803 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:14.550781046 +0000 UTC m=+150.881570708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.165838 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:14 crc kubenswrapper[4661]: E0120 18:08:14.166452 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:14.66643009 +0000 UTC m=+150.997219752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.258353 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qwp42"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.258804 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-q7s9r"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.258819 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.258834 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.258868 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.272156 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:14 crc kubenswrapper[4661]: E0120 18:08:14.272548 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:14.772534167 +0000 UTC m=+151.103323829 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.372827 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:14 crc kubenswrapper[4661]: E0120 18:08:14.373173 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:14.873151655 +0000 UTC m=+151.203941317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.387418 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.397901 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flvxz"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.450865 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.458619 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-n76xd"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.473568 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-44vhk"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.477274 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.480910 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:14 crc kubenswrapper[4661]: E0120 18:08:14.481184 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:14.981170793 +0000 UTC m=+151.311960455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.497328 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" event={"ID":"1fe4d701-349f-4edf-a59f-092ccfcdd40e","Type":"ContainerStarted","Data":"c5c778c82a5c34e972fed1d11577e086cdd464deeb22c7d36f9ab0f61e8822f0"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.500448 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" event={"ID":"17cc4c8d-5d73-4307-83ea-e826befa5b06","Type":"ContainerStarted","Data":"30b0c49ffe9d8cc818a944cbd39557d00cdf71782c6729c34053726aa8f06181"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.504253 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" event={"ID":"8a9d6945-a25a-4a09-92ee-7b90664a2edd","Type":"ContainerStarted","Data":"9f88a82276f0df236c1833bd6354caeed3f1ddfa7b0a40440880217603a527c4"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.515079 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bvpn8" event={"ID":"afe96487-2a45-4ad8-8f17-7f33186f55f4","Type":"ContainerStarted","Data":"74a42488d351370d712422737371cf1d70350aa5ec564ee94be5e6e170475495"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.517405 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" event={"ID":"550a5702-08aa-4dca-a1a0-7adfebbd9312","Type":"ContainerStarted","Data":"0b643a702a95ff202b1dc0acc7079b90a54fa1dc042ff8136d285bd1168bd564"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.528433 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-phg9x" event={"ID":"4c500541-c3f2-4f6d-8bb7-1227aa74989a","Type":"ContainerStarted","Data":"78cd7411d3700beb90063b3132373ffd49296a73375a75e8b159868fbf0d47e5"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.552320 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9htcv"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.553962 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" event={"ID":"302e8226-565c-44a4-bb0e-dee670200ae3","Type":"ContainerStarted","Data":"ff2033d256a3a9db236cb5d9dd4440cee31727050b8ea40a56ece89ace79e4b2"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.580826 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" event={"ID":"c223ef1c-922a-42b8-b8d0-428a27f5ae6d","Type":"ContainerStarted","Data":"5f1cbe3dfa9e2632fdd27e9e8f31f139dbbc95a11f739f8e4d9a8d35fb388264"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.582690 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:14 crc kubenswrapper[4661]: E0120 18:08:14.583304 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:15.083287962 +0000 UTC m=+151.414077624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.583406 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" event={"ID":"1f0c818b-31de-43ee-a20a-1fc174261b42","Type":"ContainerStarted","Data":"3a058474f96937277440c2a027dc0ba305d810d19fee387fa794396334472896"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.600883 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.600926 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" event={"ID":"ad28ae9e-274e-45fc-8202-683aadfa3494","Type":"ContainerStarted","Data":"5c1f746b0b6897193d53a9906b6a0c015efd0d8f76e9730453e9dd0cfcbb24ca"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.606257 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.610765 4661 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wd4nq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.610813 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" podUID="c223ef1c-922a-42b8-b8d0-428a27f5ae6d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.624497 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zdd7g" event={"ID":"081efbd4-859f-49bb-84d6-778ce124b602","Type":"ContainerStarted","Data":"1fe016247735f5f146faaf23eb9d26e2f3c1ebb2941fa48006bbf22cfcc77aa0"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.659588 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk" 
event={"ID":"e6808303-41c9-4185-beed-5e7460b07075","Type":"ContainerStarted","Data":"9a7b8212c63aa0e2d46af816c19e798efb05bc66c64c162b44090f9ffb19d87a"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.662109 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.673133 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" event={"ID":"457c15d5-4066-4d88-bbb4-a9fe13de20cd","Type":"ContainerStarted","Data":"5b9a63bea591f294d59b34ca047fcbc567057b96a9f92f7e9e704c64575782d8"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.673191 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" event={"ID":"457c15d5-4066-4d88-bbb4-a9fe13de20cd","Type":"ContainerStarted","Data":"a0e24c38ee002ec2152c650325d169931e8f23fe4d6b5c87987d1d7ee1d9decf"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.674022 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.684552 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:14 crc kubenswrapper[4661]: E0120 18:08:14.686114 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:15.18610204 +0000 UTC m=+151.516891702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.724610 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" event={"ID":"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0","Type":"ContainerStarted","Data":"c3085fffd5ba274c9f84df5ea9c2e1a849d6c6b951753d1ed270330dcc2b8677"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.740497 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xfrkj"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.748576 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.748622 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vx646"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.754061 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dwpwk"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.774979 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" event={"ID":"18723097-a708-4951-89bc-48ffc2128786","Type":"ContainerStarted","Data":"45bbf1e6618889b77809dfd4a075314cd069d3e594df5c5aa4a601e174d10db7"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.778878 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.785286 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:14 crc kubenswrapper[4661]: E0120 18:08:14.786925 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:15.286901883 +0000 UTC m=+151.617691555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.872082 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tptl9" event={"ID":"c7ff0869-4b3b-447f-a012-9bc155bae99b","Type":"ContainerStarted","Data":"1fcca7c9ea95087b19c2ba2f607711c981b6846f920bc037d75a71813cf21fe0"} Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.889815 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:14 crc kubenswrapper[4661]: E0120 18:08:14.893115 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:15.393102012 +0000 UTC m=+151.723891674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.914565 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.936333 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g"] Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.991520 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:14 crc kubenswrapper[4661]: E0120 18:08:14.991794 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:15.491776487 +0000 UTC m=+151.822566149 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:14 crc kubenswrapper[4661]: I0120 18:08:14.997208 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6xth2"] Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.092131 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:15 crc kubenswrapper[4661]: E0120 18:08:15.092500 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:15.592487558 +0000 UTC m=+151.923277220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.108962 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.157907 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gbprg"] Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.192574 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:15 crc kubenswrapper[4661]: E0120 18:08:15.194194 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:15.694171835 +0000 UTC m=+152.024961497 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.194259 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:15 crc kubenswrapper[4661]: E0120 18:08:15.194605 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:15.694598176 +0000 UTC m=+152.025387838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.266766 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.270841 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:15 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:15 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:15 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.270906 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.299787 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:15 crc kubenswrapper[4661]: E0120 18:08:15.300149 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:15.800076096 +0000 UTC m=+152.130865768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.403891 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:15 crc kubenswrapper[4661]: E0120 18:08:15.404761 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:15.904719243 +0000 UTC m=+152.235508905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.506682 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:15 crc kubenswrapper[4661]: E0120 18:08:15.507104 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:16.007081879 +0000 UTC m=+152.337871541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.546612 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" podStartSLOduration=130.546592046 podStartE2EDuration="2m10.546592046s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:15.442803122 +0000 UTC m=+151.773592784" watchObservedRunningTime="2026-01-20 18:08:15.546592046 +0000 UTC m=+151.877381708" Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.608484 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:15 crc kubenswrapper[4661]: E0120 18:08:15.610109 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:16.110090441 +0000 UTC m=+152.440880103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.618174 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-7bk7r" podStartSLOduration=131.618152619 podStartE2EDuration="2m11.618152619s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:15.578447396 +0000 UTC m=+151.909237058" watchObservedRunningTime="2026-01-20 18:08:15.618152619 +0000 UTC m=+151.948942271" Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.662796 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4flqc" podStartSLOduration=130.662762584 podStartE2EDuration="2m10.662762584s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:15.619093195 +0000 UTC m=+151.949882857" watchObservedRunningTime="2026-01-20 18:08:15.662762584 +0000 UTC m=+151.993552246" Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.665488 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" podStartSLOduration=130.665481518 podStartE2EDuration="2m10.665481518s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:15.663146575 +0000 UTC m=+151.993936237" watchObservedRunningTime="2026-01-20 18:08:15.665481518 +0000 UTC m=+151.996271180" Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.695621 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-tptl9" podStartSLOduration=130.695594901 podStartE2EDuration="2m10.695594901s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:15.695444417 +0000 UTC m=+152.026234079" watchObservedRunningTime="2026-01-20 18:08:15.695594901 +0000 UTC m=+152.026384563" Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.721322 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:15 crc kubenswrapper[4661]: E0120 18:08:15.721720 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 18:08:16.221661986 +0000 UTC m=+152.552451648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.822987 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:15 crc kubenswrapper[4661]: E0120 18:08:15.823468 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:16.323453096 +0000 UTC m=+152.654242758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.898406 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" event={"ID":"f1297f46-d734-41e9-a7b8-5033ce03f315","Type":"ContainerStarted","Data":"5a373b0bfa90b3a022de6494bcf37e527153e69d66d8eca661f15ec9cf64abd1"} Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.928989 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:15 crc kubenswrapper[4661]: E0120 18:08:15.929572 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:16.429546532 +0000 UTC m=+152.760336194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.943369 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.953042 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-n76xd" event={"ID":"02d802d7-1516-4eb2-98a9-2f1878609216","Type":"ContainerStarted","Data":"765e12bc0e484de2f5814bf332f68b6943d0fa66cb2b93c3b72cb54eda0b59db"} Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.953099 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-n76xd" event={"ID":"02d802d7-1516-4eb2-98a9-2f1878609216","Type":"ContainerStarted","Data":"087bb989f19cd74fc7fd2024da680b046cf5b4d739d00b7fd757dc7f0c658779"} Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.954281 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.974708 4661 patch_prober.go:28] interesting pod/console-operator-58897d9998-n76xd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.974783 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-n76xd" podUID="02d802d7-1516-4eb2-98a9-2f1878609216" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.975867 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbprg" event={"ID":"8924a8e2-8af0-49f0-9e0b-befc11e5755a","Type":"ContainerStarted","Data":"5c7508b9416d03a65e1bc4731d41522416ef4560a1c90337a6ff4efdb5c1a76e"} Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.979929 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" event={"ID":"8b7cc6dd-b02e-4e3f-b569-42201693f3e7","Type":"ContainerStarted","Data":"f8e5c64cf8456c1be1b2fd2b610e8f09b60c84cef1f89a8980df7efc8b18a715"} Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.981661 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" event={"ID":"550a5702-08aa-4dca-a1a0-7adfebbd9312","Type":"ContainerStarted","Data":"4beb6cc340fb7aa8870cd64fd9660c5ec7717870cf3e15d37c2100fe5fd96f98"} Jan 20 18:08:15 crc kubenswrapper[4661]: I0120 18:08:15.994866 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" 
event={"ID":"b3568f10-700a-4f42-8e58-f749a50cf0cb","Type":"ContainerStarted","Data":"1ef7d0a157035c96cac6b7231c9bfa84600276024e7d55df833a5284ab9b1a80"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.010117 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" event={"ID":"0d3a2340-8262-4080-8c72-8ced3d6c0c5a","Type":"ContainerStarted","Data":"c4a01ff680145433bfcdd4110f2863988e779e666d580efe93be3070289c1fb1"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.036611 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:16 crc kubenswrapper[4661]: E0120 18:08:16.037133 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:16.537090067 +0000 UTC m=+152.867879729 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.083890 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" event={"ID":"52978b9f-376f-4f49-9c2d-fc3da32b178f","Type":"ContainerStarted","Data":"ccfc7f9c65c8568cec7cd0fd45409cb53a6782d346083e9561decef28fb0581a"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.112539 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" event={"ID":"7a1f08ef-d9a0-484e-9959-14d3ab178d28","Type":"ContainerStarted","Data":"d478bea8e1e7e0bb5b56b1a2364a67ef46f694c725076ea64a878e404548786f"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.138022 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:16 crc kubenswrapper[4661]: E0120 18:08:16.138331 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:16.638311482 +0000 UTC m=+152.969101144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.184868 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zdd7g" event={"ID":"081efbd4-859f-49bb-84d6-778ce124b602","Type":"ContainerStarted","Data":"7ea2d0b0c7fbd97a5b8f112ca2bf9dfc3a9a20d4d5740c55016f42a009e404ac"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.244239 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:16 crc kubenswrapper[4661]: E0120 18:08:16.244657 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:16.744643265 +0000 UTC m=+153.075432927 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.275161 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" event={"ID":"a1a9a02c-4b40-412e-a7f1-94568385465a","Type":"ContainerStarted","Data":"f15fcc225cccf5d58e2bfab29d37a6bfd6fc687f10c538e1ff4675a3d49feb76"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.275367 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:16 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:16 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:16 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.275429 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.351993 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:16 crc kubenswrapper[4661]: E0120 18:08:16.352450 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:16.852430007 +0000 UTC m=+153.183219669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.399182 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" event={"ID":"17cc4c8d-5d73-4307-83ea-e826befa5b06","Type":"ContainerStarted","Data":"ce29c9ac06ca7202cbf4eb0079a7d15eef5a1602fd405d9b909ab06a0537d219"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.399975 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.432983 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qwp42" event={"ID":"00496c34-a198-4516-bff8-0b553db85849","Type":"ContainerStarted","Data":"5ee004cf2c73b052926483b8892a12237fc9381fc657e49f3655d6a02c49997b"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.454648 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-n76xd" podStartSLOduration=132.454630868 podStartE2EDuration="2m12.454630868s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:16.128583319 +0000 UTC m=+152.459372981" watchObservedRunningTime="2026-01-20 18:08:16.454630868 +0000 UTC m=+152.785420530" Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.454790 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:16 crc kubenswrapper[4661]: E0120 18:08:16.462771 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:16.962751497 +0000 UTC m=+153.293541159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.505504 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" podStartSLOduration=131.505474061 podStartE2EDuration="2m11.505474061s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:16.453175048 +0000 UTC m=+152.783964710" watchObservedRunningTime="2026-01-20 18:08:16.505474061 +0000 UTC m=+152.836263723" Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.534124 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.542602 4661 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-bh9mt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.542681 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" podUID="1f0c818b-31de-43ee-a20a-1fc174261b42" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.556438 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:16 crc kubenswrapper[4661]: E0120 18:08:16.557512 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:17.057492527 +0000 UTC m=+153.388282189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.589322 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" event={"ID":"28e39614-3757-41e0-b164-eb1964ff6a8d","Type":"ContainerStarted","Data":"02e5433fbadb2faac7812ded09ccfd1a6f82d114bfb7acaaee4a581252e1066e"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.612120 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" podStartSLOduration=132.612100662 podStartE2EDuration="2m12.612100662s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:16.610306953 +0000 UTC m=+152.941096615" watchObservedRunningTime="2026-01-20 18:08:16.612100662 +0000 UTC m=+152.942890324" Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.631463 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn" event={"ID":"a507ebcc-7e0b-445b-9688-882358d365ce","Type":"ContainerStarted","Data":"1a4aa570dcbc71294005b35c1176f427a7cb6ecbd895d467fdd5abcdd65aa542"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.663761 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.669379 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" event={"ID":"e192fd3a-6efd-4ce8-8915-2bba0f9dc8c0","Type":"ContainerStarted","Data":"9b20aa00ed73595da824db3de410294cfc0a096b5706cd288e3be41c8832e20e"} Jan 20 18:08:16 crc kubenswrapper[4661]: E0120 18:08:16.671768 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:17.171747363 +0000 UTC m=+153.502537025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.714074 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" event={"ID":"8a9d6945-a25a-4a09-92ee-7b90664a2edd","Type":"ContainerStarted","Data":"21b4959f61427bb9a1d730914689dbd9502da5a79d4f7491075808000a20f6f8"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.716276 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn" podStartSLOduration=131.716257886 podStartE2EDuration="2m11.716257886s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:16.714209641 +0000 UTC m=+153.044999303" watchObservedRunningTime="2026-01-20 18:08:16.716257886 +0000 UTC m=+153.047047548" Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.767588 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:16 crc kubenswrapper[4661]: E0120 18:08:16.768144 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:17.268123177 +0000 UTC m=+153.598912839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.768244 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:16 crc kubenswrapper[4661]: E0120 18:08:16.769449 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:17.269439093 +0000 UTC m=+153.600228855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.797874 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tdtbm" event={"ID":"e066f8cc-45be-4120-b044-f1852309d339","Type":"ContainerStarted","Data":"84c1e2ec77d12ef155c4b3d973a9de4c3e3de57bc7be0e3d3e827c7eb4763473"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.833933 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hkbjv" podStartSLOduration=132.833902254 podStartE2EDuration="2m12.833902254s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:16.830004889 +0000 UTC m=+153.160794551" watchObservedRunningTime="2026-01-20 18:08:16.833902254 +0000 UTC m=+153.164691916" Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.872417 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:16 crc kubenswrapper[4661]: E0120 18:08:16.872861 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:17.372841556 +0000 UTC m=+153.703631218 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.878950 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" event={"ID":"76bf33d8-cdc1-4d99-a84f-4e9289a963af","Type":"ContainerStarted","Data":"6139b5a7f0a09a3a28f33c2c4c61f44b3ddbd99b80ca7211a2fa2ea88aea6d14"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.927021 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" event={"ID":"db930c1a-9c05-419d-b168-b232e2b98e9b","Type":"ContainerStarted","Data":"978dedcd94f3404248d60003ec1ea8c05dcc8fc2e8a5e359acc59c69f9214b8a"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.951734 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bvpn8" event={"ID":"afe96487-2a45-4ad8-8f17-7f33186f55f4","Type":"ContainerStarted","Data":"88a7efa8914f58fb8adf2b5f6e9c88791ad194d5956a75d41a061004f37c62af"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.953406 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bvpn8" Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.955737 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-phg9x" event={"ID":"4c500541-c3f2-4f6d-8bb7-1227aa74989a","Type":"ContainerStarted","Data":"b226e077e4ab24849277d38c0a08eacaa3c1e7f4fed8a5f921bbc1e1eb4cb8ed"} Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.970948 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mlncs" podStartSLOduration=131.970918606 podStartE2EDuration="2m11.970918606s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:16.924205364 +0000 UTC m=+153.254995026" watchObservedRunningTime="2026-01-20 18:08:16.970918606 +0000 UTC m=+153.301708268" Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.974026 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:16 crc kubenswrapper[4661]: E0120 18:08:16.975589 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:17.475570002 +0000 UTC m=+153.806359664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.976442 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-tdtbm" podStartSLOduration=7.976429185 podStartE2EDuration="7.976429185s" podCreationTimestamp="2026-01-20 18:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:16.973351802 +0000 UTC m=+153.304141464" watchObservedRunningTime="2026-01-20 18:08:16.976429185 +0000 UTC m=+153.307218847" Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.976925 4661 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvpn8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 20 18:08:16 crc kubenswrapper[4661]: I0120 18:08:16.976976 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvpn8" podUID="afe96487-2a45-4ad8-8f17-7f33186f55f4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.002468 4661 csr.go:261] certificate signing request csr-6bc89 is approved, waiting to be issued Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.002766 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6xth2" event={"ID":"79c9338e-65b7-4805-810b-beb0b3c032a6","Type":"ContainerStarted","Data":"5487cc44c00c5a2e1af1a856e11b46a3cffe636fcc316c1f37501fddbd267010"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.019022 4661 csr.go:257] certificate signing request csr-6bc89 is issued Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.019546 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" event={"ID":"c223ef1c-922a-42b8-b8d0-428a27f5ae6d","Type":"ContainerStarted","Data":"53d40a2a00f8e68cb2f04c1ac5bca2cdb1fb4d312ff10b495cb1fb8fb2e4bd2e"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.019975 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bvpn8" podStartSLOduration=132.019957321 podStartE2EDuration="2m12.019957321s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:17.019268432 +0000 UTC m=+153.350058104" watchObservedRunningTime="2026-01-20 18:08:17.019957321 +0000 UTC m=+153.350746983" Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.036118 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" 
event={"ID":"302e8226-565c-44a4-bb0e-dee670200ae3","Type":"ContainerStarted","Data":"94f428ff30a9e2f5c34ee7d9b99f2e994b2d77fec7defb0292de40d68a26806d"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.037588 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" event={"ID":"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4","Type":"ContainerStarted","Data":"4f5c399f1f43fa72aa1342d35d3156571d0a9548a99a1e3dd64db11795daec82"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.037607 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" event={"ID":"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4","Type":"ContainerStarted","Data":"01ccab6430930e877dfb12601065669655cc9d4b67d76dc0c147bcc8885b4281"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.038854 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.053725 4661 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-flvxz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.053738 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.053815 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" podUID="9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.063984 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" event={"ID":"673a7b2a-8e9f-4d92-800e-9ae986189dec","Type":"ContainerStarted","Data":"424a96ab7704e3dc9eca4be3d7b1dc9c6bf22cf8fa6dddf5fbf42b14ebdde0a8"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.075259 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:17 crc kubenswrapper[4661]: E0120 18:08:17.076796 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:17.576762315 +0000 UTC m=+153.907551977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.089278 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" event={"ID":"8e261f9f-2027-4bb2-9254-40758baaa1ea","Type":"ContainerStarted","Data":"8f6937470765155fadcb38b1efcfcb9735db5e43e45c2eb9df1a31ce1c4a9b7b"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.096523 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-phg9x" podStartSLOduration=132.096501019 podStartE2EDuration="2m12.096501019s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:17.087138816 +0000 UTC m=+153.417928478" watchObservedRunningTime="2026-01-20 18:08:17.096501019 +0000 UTC m=+153.427290671" Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.139019 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk" event={"ID":"e6808303-41c9-4185-beed-5e7460b07075","Type":"ContainerStarted","Data":"6c9a983d9fa29aeb97cf7765d3628071215903a6b4de49f416eb6386dad17474"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.164257 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" event={"ID":"1fe4d701-349f-4edf-a59f-092ccfcdd40e","Type":"ContainerStarted","Data":"d5d07c4da5537997e86313363f49fa9ac3aa9f2893bb7612ce5d25d2f50f086f"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.177705 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.206801 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" event={"ID":"413ba99d-7214-4981-98fb-910c4f5731d8","Type":"ContainerStarted","Data":"ef823974cfb617a07a81d3bbf355f12dd9012b0591b79608be029875657f1cf7"} Jan 20 18:08:17 crc kubenswrapper[4661]: E0120 18:08:17.211647 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:17.711628039 +0000 UTC m=+154.042417701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.226111 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" event={"ID":"3a8e7e81-15e8-4127-9d43-732131427aa2","Type":"ContainerStarted","Data":"d5c6df1a3a79844264193aaa09b7e2e2dd5f6fbb3eb264137de05718eefd4a60"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.245545 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" event={"ID":"ebd4ae2f-ef41-402c-bc81-83385361e291","Type":"ContainerStarted","Data":"18179c20ea2ea835ab39eb7355f0cef3ed4fbb155cf7d7802dcedcf0a751a57a"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.245629 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" event={"ID":"ebd4ae2f-ef41-402c-bc81-83385361e291","Type":"ContainerStarted","Data":"f6733d1c89bbc4d3ce8694969cfe9abee585ed89933e322e17d05f5da277daa7"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.253733 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" event={"ID":"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65","Type":"ContainerStarted","Data":"4d5867d4439a5c36792d0cd715d94d01e00ca3947b97c102fe78bbfc2364ba19"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.256810 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-44vhk" event={"ID":"e4cd0e68-3282-4713-8386-8c86f56f1f70","Type":"ContainerStarted","Data":"7a67510e86a4b7ccb23f64efc139f40557c019064383c22b2597f6236b761361"} Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.275820 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk" podStartSLOduration=133.275798233 podStartE2EDuration="2m13.275798233s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:17.275155575 +0000 UTC m=+153.605945247" watchObservedRunningTime="2026-01-20 18:08:17.275798233 +0000 UTC m=+153.606587895" Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.276201 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" podStartSLOduration=132.276195993 podStartE2EDuration="2m12.276195993s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:17.195264417 +0000 UTC m=+153.526054079" watchObservedRunningTime="2026-01-20 18:08:17.276195993 +0000 UTC m=+153.606985655" Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.278527 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:17 crc kubenswrapper[4661]: E0120 18:08:17.278979 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:17.778964808 +0000 UTC m=+154.109754470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.285026 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:17 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:17 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:17 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.285080 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.333448 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ngxm7" podStartSLOduration=133.33342754 podStartE2EDuration="2m13.33342754s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:17.329255877 +0000 UTC m=+153.660045539" watchObservedRunningTime="2026-01-20 18:08:17.33342754 +0000 UTC m=+153.664217202" Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.386090 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:17 crc kubenswrapper[4661]: E0120 18:08:17.394751 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:17.894727216 +0000 UTC m=+154.225516878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.441003 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c4pxm" podStartSLOduration=132.440932494 podStartE2EDuration="2m12.440932494s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:17.437972454 +0000 UTC m=+153.768762116" watchObservedRunningTime="2026-01-20 18:08:17.440932494 +0000 UTC m=+153.771722156" Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.493317 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:17 crc kubenswrapper[4661]: E0120 18:08:17.493616 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:17.993595877 +0000 UTC m=+154.324385539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.596102 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:17 crc kubenswrapper[4661]: E0120 18:08:17.597285 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:18.097268428 +0000 UTC m=+154.428058090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.698054 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:17 crc kubenswrapper[4661]: E0120 18:08:17.698361 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:18.198337278 +0000 UTC m=+154.529126940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.801697 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:17 crc kubenswrapper[4661]: E0120 18:08:17.802440 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:18.30242449 +0000 UTC m=+154.633214142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:17 crc kubenswrapper[4661]: I0120 18:08:17.902782 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:17 crc kubenswrapper[4661]: E0120 18:08:17.903204 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:18.403182362 +0000 UTC m=+154.733972014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.009487 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:18 crc kubenswrapper[4661]: E0120 18:08:18.009880 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:18.509867084 +0000 UTC m=+154.840656746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.027866 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-20 18:03:17 +0000 UTC, rotation deadline is 2026-12-14 07:52:22.579846169 +0000 UTC Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.027920 4661 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7861h44m4.551929467s for next certificate rotation Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.113187 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:18 crc kubenswrapper[4661]: E0120 18:08:18.113587 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:18.613569326 +0000 UTC m=+154.944358988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.215329 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:18 crc kubenswrapper[4661]: E0120 18:08:18.215811 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:18.715787758 +0000 UTC m=+155.046577420 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.264862 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:18 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:18 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:18 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.264920 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.275863 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" event={"ID":"673a7b2a-8e9f-4d92-800e-9ae986189dec","Type":"ContainerStarted","Data":"a0da6ed18db99f5cc3bf3e74f216a1eacea3a290ea6fc56c9e78e8b8903cd471"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.276054 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.278600 4661 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-76h24 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.278637 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" podUID="673a7b2a-8e9f-4d92-800e-9ae986189dec" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.316454 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:18 crc kubenswrapper[4661]: E0120 18:08:18.316627 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:18.816596231 +0000 UTC m=+155.147385893 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.316712 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:18 crc kubenswrapper[4661]: E0120 18:08:18.317113 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:18.817105215 +0000 UTC m=+155.147894877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.323926 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" event={"ID":"b3568f10-700a-4f42-8e58-f749a50cf0cb","Type":"ContainerStarted","Data":"93bec243b483ac810b08d006d043598db0ec32f09a351bd5fc5ba80461ecef5f"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.330824 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" podStartSLOduration=133.330803625 podStartE2EDuration="2m13.330803625s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:18.31617942 +0000 UTC m=+154.646969072" watchObservedRunningTime="2026-01-20 18:08:18.330803625 +0000 UTC m=+154.661593287" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.335955 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-446tk" event={"ID":"e6808303-41c9-4185-beed-5e7460b07075","Type":"ContainerStarted","Data":"16fe0eab6c0784b56c16043aa2cff24e824720ab4626e2a84c429da8f5b5f776"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.344103 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" event={"ID":"a1a9a02c-4b40-412e-a7f1-94568385465a","Type":"ContainerStarted","Data":"6b59c264aa81fe4362224755a3e03fa0293738afe00c91d480a8c8a2938a8e35"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.345403 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.353083 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tdtbm" event={"ID":"e066f8cc-45be-4120-b044-f1852309d339","Type":"ContainerStarted","Data":"d981ebf0144003c11679dfb913c6ff46bf89fd7bae5f9ae2a944257597c20443"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.356551 4661 generic.go:334] "Generic (PLEG): container finished" podID="53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65" containerID="5a9907f7996cfa422bce663b2d6f833981951a261ee4570cbfa2ce1abcb7eb31" exitCode=0 Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.356604 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" event={"ID":"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65","Type":"ContainerDied","Data":"5a9907f7996cfa422bce663b2d6f833981951a261ee4570cbfa2ce1abcb7eb31"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.365505 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.367613 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfmq7" podStartSLOduration=133.367602219 podStartE2EDuration="2m13.367602219s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:18.366588592 +0000 UTC m=+154.697378254" watchObservedRunningTime="2026-01-20 18:08:18.367602219 +0000 UTC m=+154.698391881" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.387334 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" event={"ID":"1f0c818b-31de-43ee-a20a-1fc174261b42","Type":"ContainerStarted","Data":"6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.392106 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" event={"ID":"7a1f08ef-d9a0-484e-9959-14d3ab178d28","Type":"ContainerStarted","Data":"5bc222cbabf1caaccf29dc04a46ebaf7c5ea9847c82556c26cd119bc347eb80a"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.393959 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-44vhk" event={"ID":"e4cd0e68-3282-4713-8386-8c86f56f1f70","Type":"ContainerStarted","Data":"168c75344794d5bb588390349f98f37049b6387b39d653292bcb11b9ea257698"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.410512 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qdlnn" event={"ID":"a507ebcc-7e0b-445b-9688-882358d365ce","Type":"ContainerStarted","Data":"9ad7fac9e89dd0d727c9b7302cff19aacd5d543b51e26341552c3507413974a0"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.411882 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.413165 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" event={"ID":"8b7cc6dd-b02e-4e3f-b569-42201693f3e7","Type":"ContainerStarted","Data":"25de45df22b783330ec5a77dcd6f156aeee2fe9ab7b08a32cf93117496116341"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.418457 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:18 crc kubenswrapper[4661]: E0120 18:08:18.419621 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:18.919600894 +0000 UTC m=+155.250390556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.448162 4661 generic.go:334] "Generic (PLEG): container finished" podID="8e261f9f-2027-4bb2-9254-40758baaa1ea" containerID="80615f16e83051d64aaeee15faf83d695e178b0495aab40ae65dcf75a689b4c6" exitCode=0 Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.448288 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" event={"ID":"8e261f9f-2027-4bb2-9254-40758baaa1ea","Type":"ContainerStarted","Data":"0360389fd8910d2b468a6212e5e6a9bfe150655572c233a445c3e5c42561abc2"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.448324 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" event={"ID":"8e261f9f-2027-4bb2-9254-40758baaa1ea","Type":"ContainerDied","Data":"80615f16e83051d64aaeee15faf83d695e178b0495aab40ae65dcf75a689b4c6"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.448953 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.450218 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" event={"ID":"db930c1a-9c05-419d-b168-b232e2b98e9b","Type":"ContainerStarted","Data":"4237067469e752b1f03ea84787eba598d1ead880fd59528a4c266373af3bf6e1"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.451615 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" event={"ID":"76bf33d8-cdc1-4d99-a84f-4e9289a963af","Type":"ContainerStarted","Data":"53d045f4320297d7ec2832687432b15bec157f38c502868f73af02ba429437ef"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.451640 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" 
event={"ID":"76bf33d8-cdc1-4d99-a84f-4e9289a963af","Type":"ContainerStarted","Data":"10c8b7b244a92fd8604ad5fb74d918eb6b2d8f3ecf6e9c13e57e1d0727cf2345"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.472901 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" event={"ID":"302e8226-565c-44a4-bb0e-dee670200ae3","Type":"ContainerStarted","Data":"ea801346cb6bd8761b2297ab4862b17c5f62257ca96528b327c80a02f3762447"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.487381 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zdd7g" event={"ID":"081efbd4-859f-49bb-84d6-778ce124b602","Type":"ContainerStarted","Data":"6c016146b953d102667e8277202a0ed721be465f1528655c1fbc69580e87f7fd"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.497058 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" event={"ID":"f1297f46-d734-41e9-a7b8-5033ce03f315","Type":"ContainerStarted","Data":"baa797da434dd587f21c688e7bf904a3f7a457f9aaba09ddfc3e7bfd2f83d07a"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.504373 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" event={"ID":"8c6c735b-2d38-430c-a5b7-10b9b06ef623","Type":"ContainerStarted","Data":"ca22c9fa397ec0e4eb0b920c2f646f797608e5226c11154e312d2063edd709c7"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.505965 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbprg" event={"ID":"8924a8e2-8af0-49f0-9e0b-befc11e5755a","Type":"ContainerStarted","Data":"0af01e09e32788357ea88999ad4434089eff6bf5480718653c93df2b01f4eb06"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.506880 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" event={"ID":"3a8e7e81-15e8-4127-9d43-732131427aa2","Type":"ContainerStarted","Data":"6e758465d1adf8a0557a1da6cf36fc38a5a240d283bf7e357e69c12197c79e60"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.507575 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.512087 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" event={"ID":"0d3a2340-8262-4080-8c72-8ced3d6c0c5a","Type":"ContainerStarted","Data":"f7fd71b70b83a2a4a959814c72efd87810c34813922c4b7b3b58dfb1d4368d6e"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.520955 4661 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-454g9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.521023 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" podUID="3a8e7e81-15e8-4127-9d43-732131427aa2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.521507 4661 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:18 crc kubenswrapper[4661]: E0120 18:08:18.523467 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.02345252 +0000 UTC m=+155.354242282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.527389 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qwp42" event={"ID":"00496c34-a198-4516-bff8-0b553db85849","Type":"ContainerStarted","Data":"8429dfab064158c3987986eec4438575382faf7851b151eb4f55441c4e63f4b4"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.527441 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qwp42" event={"ID":"00496c34-a198-4516-bff8-0b553db85849","Type":"ContainerStarted","Data":"62b57b81b8a6e98694ebf38557c2b91b7b127fe3e6218c3481ab151b4c19c338"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.533146 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6xth2" event={"ID":"79c9338e-65b7-4805-810b-beb0b3c032a6","Type":"ContainerStarted","Data":"baf46a4a532ec063290f9f77ecd38d0126e3c0e7f5ea98509eee65d4c38d9a39"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.537038 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" event={"ID":"52978b9f-376f-4f49-9c2d-fc3da32b178f","Type":"ContainerStarted","Data":"34699b8881b8151c3ffab64526035ae5d7538f50e365af13613176832d199a7d"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.537084 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" event={"ID":"52978b9f-376f-4f49-9c2d-fc3da32b178f","Type":"ContainerStarted","Data":"3104d30528b7614cece1e4f448cafbc7121a93e3c57c60d1736773cf00e2ef99"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.538664 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" event={"ID":"28e39614-3757-41e0-b164-eb1964ff6a8d","Type":"ContainerStarted","Data":"ac4d6a4749253eadf6e79e68a4c9c67422ef2587c2433bdb4eac2c10179fe3c9"} Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.541459 4661 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-flvxz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" 
start-of-body= Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.541501 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" podUID="9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.541551 4661 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvpn8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.541563 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvpn8" podUID="afe96487-2a45-4ad8-8f17-7f33186f55f4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.625583 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lrwcc" podStartSLOduration=133.625561607 podStartE2EDuration="2m13.625561607s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:18.487018785 +0000 UTC m=+154.817808447" watchObservedRunningTime="2026-01-20 18:08:18.625561607 +0000 UTC m=+154.956351269" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.626422 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:18 crc kubenswrapper[4661]: E0120 18:08:18.626523 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.126502233 +0000 UTC m=+155.457291895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.626917 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:18 crc kubenswrapper[4661]: E0120 18:08:18.629416 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.129407991 +0000 UTC m=+155.460197653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.728783 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:18 crc kubenswrapper[4661]: E0120 18:08:18.730302 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.230269726 +0000 UTC m=+155.561059388 (durationBeforeRetry 500ms). 
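The repeated UnmountVolume.TearDown and MountVolume.MountDevice failures above all reduce to the same condition: the kubelet has no registered CSI driver named kubevirt.io.hostpath-provisioner yet, so both the old pod's volume teardown and the new image-registry pod's mount keep being rescheduled. One way to see which drivers the kubelet has actually registered is to read the node's CSINode object, which the kubelet keeps in sync with its plugin registry. A minimal sketch, assuming client-go is available, a kubeconfig at the default path, and the node name crc from these journal lines:

    // csidrivers.go: list the CSI drivers registered on node "crc".
    // Illustrative sketch only; kubeconfig path and node name are assumptions
    // taken from this environment, not part of the log.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"path/filepath"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// The kubelet publishes its registered CSI plugins into the node's CSINode
    	// object; kubevirt.io.hostpath-provisioner should appear here once the
    	// hostpath-provisioner plugin pod has registered with the kubelet.
    	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, d := range csiNode.Spec.Drivers {
    		fmt.Println("registered:", d.Name)
    	}
    }

Until the driver shows up in that list, every retry below is expected to fail with the same "not found in the list of registered CSI drivers" error.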
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.752393 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-44vhk" podStartSLOduration=133.752369423 podStartE2EDuration="2m13.752369423s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:18.746752881 +0000 UTC m=+155.077542543" watchObservedRunningTime="2026-01-20 18:08:18.752369423 +0000 UTC m=+155.083159085" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.841051 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:18 crc kubenswrapper[4661]: E0120 18:08:18.841488 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.34147262 +0000 UTC m=+155.672262282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.846394 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-n76xd" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.851648 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-7hbkg" podStartSLOduration=133.851615114 podStartE2EDuration="2m13.851615114s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:18.829952099 +0000 UTC m=+155.160741761" watchObservedRunningTime="2026-01-20 18:08:18.851615114 +0000 UTC m=+155.182404776" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.942344 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:18 crc kubenswrapper[4661]: E0120 18:08:18.942800 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.442777527 +0000 UTC m=+155.773567189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.977869 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4p2m6" podStartSLOduration=133.977843805 podStartE2EDuration="2m13.977843805s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:18.8640448 +0000 UTC m=+155.194834462" watchObservedRunningTime="2026-01-20 18:08:18.977843805 +0000 UTC m=+155.308633467" Jan 20 18:08:18 crc kubenswrapper[4661]: I0120 18:08:18.979344 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" podStartSLOduration=134.979334755 podStartE2EDuration="2m14.979334755s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:18.979227892 +0000 UTC m=+155.310017564" watchObservedRunningTime="2026-01-20 18:08:18.979334755 +0000 UTC m=+155.310124417" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.043834 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.044307 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.54428697 +0000 UTC m=+155.875076632 (durationBeforeRetry 500ms). 
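The pod_startup_latency_tracker entries report the same interval three ways: podStartSLOduration in seconds, podStartE2EDuration as a Go duration string, and the timestamps they are derived from. For these pods the image was never pulled in this window (firstStartedPulling and lastFinishedPulling stay at the zero time), and the reported duration is simply watchObservedRunningTime minus podCreationTimestamp. A small sketch reproducing the arithmetic with the values from the kube-controller-manager-operator record above:

    // startup_latency.go: recompute podStartE2EDuration from the timestamps in
    // the "Observed pod startup duration" log line (values copied from above).
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, err := time.Parse(layout, "2026-01-20 18:06:05 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	observed, err := time.Parse(layout, "2026-01-20 18:08:18.977843805 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	// Prints 2m13.977843805s, matching podStartE2EDuration and
    	// podStartSLOduration=133.977843805 in the record above.
    	fmt.Println(observed.Sub(created))
    }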
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.132300 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" podStartSLOduration=135.132273197 podStartE2EDuration="2m15.132273197s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:19.097412685 +0000 UTC m=+155.428202357" watchObservedRunningTime="2026-01-20 18:08:19.132273197 +0000 UTC m=+155.463062869" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.145476 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.145709 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.645678749 +0000 UTC m=+155.976468411 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.145825 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.146168 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.646154542 +0000 UTC m=+155.976944204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.247293 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.247576 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.747540071 +0000 UTC m=+156.078329733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.247628 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.248119 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.748098596 +0000 UTC m=+156.078888248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.254829 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vx646" podStartSLOduration=134.254810487 podStartE2EDuration="2m14.254810487s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:19.148558097 +0000 UTC m=+155.479347759" watchObservedRunningTime="2026-01-20 18:08:19.254810487 +0000 UTC m=+155.585600149" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.255078 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6qrc" podStartSLOduration=134.255072214 podStartE2EDuration="2m14.255072214s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:19.254803537 +0000 UTC m=+155.585593199" watchObservedRunningTime="2026-01-20 18:08:19.255072214 +0000 UTC m=+155.585861876" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.265071 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:19 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:19 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:19 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.265134 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.348884 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.349108 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.849072284 +0000 UTC m=+156.179861946 (durationBeforeRetry 500ms). 
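Several probes are still failing at this point: marketplace-operator and the console downloads pod refuse TCP connections, and the default router's startup probe reports backend-http and has-synced not ready. When triaging a boot like this it helps to tally which pods are still failing probes rather than reading each prober.go line. A minimal sketch that scans journal output on stdin for the "Probe failed" lines in the format shown above (the journal unit name is an assumption and may differ on other hosts):

    // probe_failures.go: count "Probe failed" journal lines per probe type and pod.
    // Example use (unit name assumed): journalctl -u kubelet | go run probe_failures.go
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	// Matches prober.go:107 lines such as:
    	//   "Probe failed" probeType="Readiness" pod="openshift-console/downloads-..."
    	re := regexp.MustCompile(`"Probe failed" probeType="([^"]+)" pod="([^"]+)"`)
    	counts := map[string]int{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
    	for sc.Scan() {
    		if m := re.FindStringSubmatch(sc.Text()); m != nil {
    			counts[m[1]+"  "+m[2]]++
    		}
    	}
    	for k, n := range counts {
    		fmt.Printf("%4d  %s\n", n, k)
    	}
    }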
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.349284 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.349776 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.849765083 +0000 UTC m=+156.180554745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.391384 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mcf6f" podStartSLOduration=134.391361746 podStartE2EDuration="2m14.391361746s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:19.38741795 +0000 UTC m=+155.718207612" watchObservedRunningTime="2026-01-20 18:08:19.391361746 +0000 UTC m=+155.722151408" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.450323 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.450775 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:19.950754961 +0000 UTC m=+156.281544613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.547090 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" event={"ID":"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65","Type":"ContainerStarted","Data":"e4509f35ed53b6030afa19ec8c0cf0fa13e866c1dbfaab843d68e7ab790652a7"} Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.547151 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" event={"ID":"53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65","Type":"ContainerStarted","Data":"407b5f48ee62f444b8b4012bc5353150c6d396ec954c13aa17fe04f25607e3ee"} Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.551871 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.552330 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:20.052313055 +0000 UTC m=+156.383102727 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.555118 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-44vhk" event={"ID":"e4cd0e68-3282-4713-8386-8c86f56f1f70","Type":"ContainerStarted","Data":"07e7e05e98af83bc5a9efad22a00d1922a271fac5cd147e45f5039dad3177faf"} Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.558970 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" event={"ID":"550a5702-08aa-4dca-a1a0-7adfebbd9312","Type":"ContainerStarted","Data":"a04e4937983f78efd1d1ba5a8d158a03d3c0055495b2140bdf02245127df672b"} Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.560903 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbprg" event={"ID":"8924a8e2-8af0-49f0-9e0b-befc11e5755a","Type":"ContainerStarted","Data":"a215ce62e7913d6a427094cf4d89131ce3bd078a92fd573ba98dfd96943dd6e2"} Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.561098 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gbprg" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.562459 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" event={"ID":"413ba99d-7214-4981-98fb-910c4f5731d8","Type":"ContainerStarted","Data":"e928e97923863d7d94a0dcf0e6258f82a276f85fa8ad587b8be13cbd8e97a668"} Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.563376 4661 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-flvxz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.563449 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" podUID="9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.565040 4661 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvpn8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.565089 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvpn8" podUID="afe96487-2a45-4ad8-8f17-7f33186f55f4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.572197 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns-operator/dns-operator-744455d44c-zdd7g" podStartSLOduration=134.572177151 podStartE2EDuration="2m14.572177151s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:19.491884632 +0000 UTC m=+155.822674294" watchObservedRunningTime="2026-01-20 18:08:19.572177151 +0000 UTC m=+155.902966813" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.572316 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9j9rr" podStartSLOduration=134.572312905 podStartE2EDuration="2m14.572312905s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:19.570775903 +0000 UTC m=+155.901565575" watchObservedRunningTime="2026-01-20 18:08:19.572312905 +0000 UTC m=+155.903102567" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.653257 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.655766 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:20.155736499 +0000 UTC m=+156.486526171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.708913 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-76h24" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.755715 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.760743 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:20.260724705 +0000 UTC m=+156.591514367 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.861232 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.861525 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:20.361471427 +0000 UTC m=+156.692261089 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.861963 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.862356 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:20.362339861 +0000 UTC m=+156.693129523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.894096 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qwp42" podStartSLOduration=134.894070598 podStartE2EDuration="2m14.894070598s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:19.877349476 +0000 UTC m=+156.208139128" watchObservedRunningTime="2026-01-20 18:08:19.894070598 +0000 UTC m=+156.224860260" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.895780 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2qq6g" podStartSLOduration=134.895774474 podStartE2EDuration="2m14.895774474s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:19.673867529 +0000 UTC m=+156.004657191" watchObservedRunningTime="2026-01-20 18:08:19.895774474 +0000 UTC m=+156.226564136" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.947415 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7lfrt"] Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.952309 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:08:19 crc kubenswrapper[4661]: I0120 18:08:19.971159 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:19 crc kubenswrapper[4661]: E0120 18:08:19.971861 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:20.471817878 +0000 UTC m=+156.802607540 (durationBeforeRetry 500ms). 
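Each failed attempt above ends with a "No retries permitted until … (durationBeforeRetry 500ms)" entry from nestedpendingoperations.go: the kubelet records the failure and refuses to start the same volume operation again until the hold-off has passed, which is why the TearDown and MountDevice attempts for pvc-657094db-… recur at roughly half-second intervals. A toy model of that gating idea, assuming the fixed 500 ms hold-off logged here; this is an illustration, not the kubelet's actual implementation:

    // retry_gate.go: simplified model of the per-operation hold-off logged as
    // "No retries permitted until <t> (durationBeforeRetry 500ms)".
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    type gate struct {
    	notBefore map[string]time.Time // operation key -> earliest next attempt
    	holdOff   time.Duration
    }

    func (g *gate) try(key string, op func() error) error {
    	if t, ok := g.notBefore[key]; ok && time.Now().Before(t) {
    		return fmt.Errorf("no retries permitted until %s", t.Format(time.RFC3339Nano))
    	}
    	if err := op(); err != nil {
    		g.notBefore[key] = time.Now().Add(g.holdOff) // back off before the next attempt
    		return err
    	}
    	delete(g.notBefore, key)
    	return nil
    }

    func main() {
    	g := &gate{notBefore: map[string]time.Time{}, holdOff: 500 * time.Millisecond}
    	key := "kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
    	mount := func() error { return errors.New("driver not found in the list of registered CSI drivers") }
    	for i := 0; i < 3; i++ {
    		fmt.Println(g.try(key, mount))
    		time.Sleep(200 * time.Millisecond) // some attempts land inside the hold-off window
    	}
    }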
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.018948 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5jf5n"] Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.022771 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.025087 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7lfrt"] Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.048271 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.053181 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.062169 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" podStartSLOduration=135.062144668 podStartE2EDuration="2m15.062144668s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:20.033233117 +0000 UTC m=+156.364022779" watchObservedRunningTime="2026-01-20 18:08:20.062144668 +0000 UTC m=+156.392934330" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.064485 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5jf5n"] Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.074492 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34658be-616a-469f-a560-61709f82cde6-utilities\") pod \"community-operators-7lfrt\" (UID: \"c34658be-616a-469f-a560-61709f82cde6\") " pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.074593 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.074625 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpsks\" (UniqueName: \"kubernetes.io/projected/c34658be-616a-469f-a560-61709f82cde6-kube-api-access-lpsks\") pod \"community-operators-7lfrt\" (UID: \"c34658be-616a-469f-a560-61709f82cde6\") " pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.074641 4661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34658be-616a-469f-a560-61709f82cde6-catalog-content\") pod \"community-operators-7lfrt\" (UID: \"c34658be-616a-469f-a560-61709f82cde6\") " pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:08:20 crc kubenswrapper[4661]: E0120 18:08:20.075047 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:20.575033147 +0000 UTC m=+156.905822809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.145515 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6xth2" podStartSLOduration=11.14549559 podStartE2EDuration="11.14549559s" podCreationTimestamp="2026-01-20 18:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:20.145187442 +0000 UTC m=+156.475977114" watchObservedRunningTime="2026-01-20 18:08:20.14549559 +0000 UTC m=+156.476285252" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.167487 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kpzrp"] Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.168771 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.180412 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.180570 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34658be-616a-469f-a560-61709f82cde6-utilities\") pod \"community-operators-7lfrt\" (UID: \"c34658be-616a-469f-a560-61709f82cde6\") " pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.180608 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wchwz\" (UniqueName: \"kubernetes.io/projected/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-kube-api-access-wchwz\") pod \"certified-operators-5jf5n\" (UID: \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\") " pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.180642 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-utilities\") pod \"certified-operators-5jf5n\" (UID: \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\") " pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.180683 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-catalog-content\") pod \"certified-operators-5jf5n\" (UID: \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\") " pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.180739 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpsks\" (UniqueName: \"kubernetes.io/projected/c34658be-616a-469f-a560-61709f82cde6-kube-api-access-lpsks\") pod \"community-operators-7lfrt\" (UID: \"c34658be-616a-469f-a560-61709f82cde6\") " pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.180758 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34658be-616a-469f-a560-61709f82cde6-catalog-content\") pod \"community-operators-7lfrt\" (UID: \"c34658be-616a-469f-a560-61709f82cde6\") " pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.181195 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34658be-616a-469f-a560-61709f82cde6-catalog-content\") pod \"community-operators-7lfrt\" (UID: \"c34658be-616a-469f-a560-61709f82cde6\") " pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:08:20 crc kubenswrapper[4661]: E0120 18:08:20.181462 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:20.681442481 +0000 UTC m=+157.012232143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.181702 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34658be-616a-469f-a560-61709f82cde6-utilities\") pod \"community-operators-7lfrt\" (UID: \"c34658be-616a-469f-a560-61709f82cde6\") " pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.270548 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:20 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:20 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:20 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.270958 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.278982 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpsks\" (UniqueName: \"kubernetes.io/projected/c34658be-616a-469f-a560-61709f82cde6-kube-api-access-lpsks\") pod \"community-operators-7lfrt\" (UID: \"c34658be-616a-469f-a560-61709f82cde6\") " pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.284904 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wchwz\" (UniqueName: \"kubernetes.io/projected/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-kube-api-access-wchwz\") pod \"certified-operators-5jf5n\" (UID: \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\") " pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.284978 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-utilities\") pod \"certified-operators-5jf5n\" (UID: \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\") " pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.285011 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-catalog-content\") pod \"certified-operators-5jf5n\" (UID: \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\") " pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.285052 4661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48199b-41a0-44a2-b1b4-2f623ab0f413-utilities\") pod \"community-operators-kpzrp\" (UID: \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\") " pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.285103 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48199b-41a0-44a2-b1b4-2f623ab0f413-catalog-content\") pod \"community-operators-kpzrp\" (UID: \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\") " pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.285157 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.285206 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks96p\" (UniqueName: \"kubernetes.io/projected/8f48199b-41a0-44a2-b1b4-2f623ab0f413-kube-api-access-ks96p\") pod \"community-operators-kpzrp\" (UID: \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\") " pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.286163 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-utilities\") pod \"certified-operators-5jf5n\" (UID: \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\") " pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.286448 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-catalog-content\") pod \"certified-operators-5jf5n\" (UID: \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\") " pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:08:20 crc kubenswrapper[4661]: E0120 18:08:20.286839 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:20.786823708 +0000 UTC m=+157.117613370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.335063 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.357483 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vgcbk"] Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.358293 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" podStartSLOduration=135.358271329 podStartE2EDuration="2m15.358271329s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:20.341519676 +0000 UTC m=+156.672309338" watchObservedRunningTime="2026-01-20 18:08:20.358271329 +0000 UTC m=+156.689060991" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.358491 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.386788 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.387031 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-catalog-content\") pod \"certified-operators-vgcbk\" (UID: \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\") " pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.387065 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48199b-41a0-44a2-b1b4-2f623ab0f413-catalog-content\") pod \"community-operators-kpzrp\" (UID: \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\") " pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.387115 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks96p\" (UniqueName: \"kubernetes.io/projected/8f48199b-41a0-44a2-b1b4-2f623ab0f413-kube-api-access-ks96p\") pod \"community-operators-kpzrp\" (UID: \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\") " pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.387162 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh765\" (UniqueName: \"kubernetes.io/projected/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-kube-api-access-sh765\") pod \"certified-operators-vgcbk\" (UID: \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\") " pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.387181 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-utilities\") pod \"certified-operators-vgcbk\" (UID: \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\") " pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.387204 4661 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48199b-41a0-44a2-b1b4-2f623ab0f413-utilities\") pod \"community-operators-kpzrp\" (UID: \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\") " pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.387880 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48199b-41a0-44a2-b1b4-2f623ab0f413-utilities\") pod \"community-operators-kpzrp\" (UID: \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\") " pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:08:20 crc kubenswrapper[4661]: E0120 18:08:20.387929 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:20.887875599 +0000 UTC m=+157.218665431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.388120 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48199b-41a0-44a2-b1b4-2f623ab0f413-catalog-content\") pod \"community-operators-kpzrp\" (UID: \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\") " pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.402901 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kpzrp"] Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.450726 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wchwz\" (UniqueName: \"kubernetes.io/projected/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-kube-api-access-wchwz\") pod \"certified-operators-5jf5n\" (UID: \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\") " pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.488843 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-utilities\") pod \"certified-operators-vgcbk\" (UID: \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\") " pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.488911 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-catalog-content\") pod \"certified-operators-vgcbk\" (UID: \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\") " pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.488945 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.489044 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh765\" (UniqueName: \"kubernetes.io/projected/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-kube-api-access-sh765\") pod \"certified-operators-vgcbk\" (UID: \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\") " pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.490392 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-utilities\") pod \"certified-operators-vgcbk\" (UID: \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\") " pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.490626 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-catalog-content\") pod \"certified-operators-vgcbk\" (UID: \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\") " pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:08:20 crc kubenswrapper[4661]: E0120 18:08:20.490939 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:20.990925183 +0000 UTC m=+157.321714845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.507427 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks96p\" (UniqueName: \"kubernetes.io/projected/8f48199b-41a0-44a2-b1b4-2f623ab0f413-kube-api-access-ks96p\") pod \"community-operators-kpzrp\" (UID: \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\") " pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.528152 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-xfrkj" podStartSLOduration=135.528128838 podStartE2EDuration="2m15.528128838s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:20.524167761 +0000 UTC m=+156.854957423" watchObservedRunningTime="2026-01-20 18:08:20.528128838 +0000 UTC m=+156.858918500" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.529740 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgcbk"] Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.569385 4661 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-454g9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.569478 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" podUID="3a8e7e81-15e8-4127-9d43-732131427aa2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.589508 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:20 crc kubenswrapper[4661]: E0120 18:08:20.589903 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:21.089884766 +0000 UTC m=+157.420674428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.608578 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh765\" (UniqueName: \"kubernetes.io/projected/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-kube-api-access-sh765\") pod \"certified-operators-vgcbk\" (UID: \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\") " pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.611975 4661 generic.go:334] "Generic (PLEG): container finished" podID="7a1f08ef-d9a0-484e-9959-14d3ab178d28" containerID="5bc222cbabf1caaccf29dc04a46ebaf7c5ea9847c82556c26cd119bc347eb80a" exitCode=0 Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.612054 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" event={"ID":"7a1f08ef-d9a0-484e-9959-14d3ab178d28","Type":"ContainerDied","Data":"5bc222cbabf1caaccf29dc04a46ebaf7c5ea9847c82556c26cd119bc347eb80a"} Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.649830 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" event={"ID":"413ba99d-7214-4981-98fb-910c4f5731d8","Type":"ContainerStarted","Data":"98b6da9a038377b0c3dec2dd7ba9f87a80db437c598a59c687418053466f46ff"} Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.649884 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" event={"ID":"413ba99d-7214-4981-98fb-910c4f5731d8","Type":"ContainerStarted","Data":"19927f1a0475877f07aa9db52c2223d883238c5f7bcc1750ce4c71edd632b8cd"} Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.672843 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.693796 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:20 crc kubenswrapper[4661]: E0120 18:08:20.694223 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:21.194206174 +0000 UTC m=+157.524995836 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.703855 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.799416 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:20 crc kubenswrapper[4661]: E0120 18:08:20.799653 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:21.299620932 +0000 UTC m=+157.630410594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.799786 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:20 crc kubenswrapper[4661]: E0120 18:08:20.800177 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:21.300169547 +0000 UTC m=+157.630959199 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.804686 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.875924 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gbprg" podStartSLOduration=10.875904503 podStartE2EDuration="10.875904503s" podCreationTimestamp="2026-01-20 18:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:20.852233024 +0000 UTC m=+157.183022686" watchObservedRunningTime="2026-01-20 18:08:20.875904503 +0000 UTC m=+157.206694165" Jan 20 18:08:20 crc kubenswrapper[4661]: I0120 18:08:20.903701 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:20 crc kubenswrapper[4661]: E0120 18:08:20.904500 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:21.404478015 +0000 UTC m=+157.735267677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.008937 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:21 crc kubenswrapper[4661]: E0120 18:08:21.009344 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:21.509329168 +0000 UTC m=+157.840118830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.110044 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:21 crc kubenswrapper[4661]: E0120 18:08:21.110724 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:21.610700887 +0000 UTC m=+157.941490549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.211717 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:21 crc kubenswrapper[4661]: E0120 18:08:21.212107 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:21.712088216 +0000 UTC m=+158.042877878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.281218 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:21 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:21 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:21 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.281282 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.317357 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:21 crc kubenswrapper[4661]: E0120 18:08:21.317905 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:21.817868443 +0000 UTC m=+158.148658105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.318200 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:21 crc kubenswrapper[4661]: E0120 18:08:21.318585 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:21.818569222 +0000 UTC m=+158.149358884 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.348151 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xlt4n" podStartSLOduration=136.348126221 podStartE2EDuration="2m16.348126221s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:21.34661099 +0000 UTC m=+157.677400652" watchObservedRunningTime="2026-01-20 18:08:21.348126221 +0000 UTC m=+157.678915883" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.420516 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:21 crc kubenswrapper[4661]: E0120 18:08:21.420948 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:21.920926818 +0000 UTC m=+158.251716480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.439929 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9htcv" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.457395 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" podStartSLOduration=137.457368442 podStartE2EDuration="2m17.457368442s" podCreationTimestamp="2026-01-20 18:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:21.452927452 +0000 UTC m=+157.783717114" watchObservedRunningTime="2026-01-20 18:08:21.457368442 +0000 UTC m=+157.788158104" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.524147 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:21 crc kubenswrapper[4661]: E0120 18:08:21.524595 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:22.024579298 +0000 UTC m=+158.355368960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.602136 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7lfrt"] Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.628702 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:21 crc kubenswrapper[4661]: E0120 18:08:21.630213 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:22.130192641 +0000 UTC m=+158.460982303 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.634296 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rvkbf"] Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.636650 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.641052 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.654809 4661 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-454g9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.654864 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" podUID="3a8e7e81-15e8-4127-9d43-732131427aa2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.692244 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lfrt" event={"ID":"c34658be-616a-469f-a560-61709f82cde6","Type":"ContainerStarted","Data":"10f52907222a143512351165bdb5cb0d5a92bf6f9309ecde7fc3da63cfec33fc"} Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.730034 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40861cf6-5e11-46ad-be02-b415c4f06dee-catalog-content\") pod \"redhat-marketplace-rvkbf\" (UID: \"40861cf6-5e11-46ad-be02-b415c4f06dee\") " pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.730112 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.730135 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g6cx\" (UniqueName: \"kubernetes.io/projected/40861cf6-5e11-46ad-be02-b415c4f06dee-kube-api-access-9g6cx\") pod \"redhat-marketplace-rvkbf\" (UID: \"40861cf6-5e11-46ad-be02-b415c4f06dee\") " pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.730167 4661 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40861cf6-5e11-46ad-be02-b415c4f06dee-utilities\") pod \"redhat-marketplace-rvkbf\" (UID: \"40861cf6-5e11-46ad-be02-b415c4f06dee\") " pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:08:21 crc kubenswrapper[4661]: E0120 18:08:21.730506 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:22.230491681 +0000 UTC m=+158.561281343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.742542 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvkbf"] Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.832512 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.832724 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g6cx\" (UniqueName: \"kubernetes.io/projected/40861cf6-5e11-46ad-be02-b415c4f06dee-kube-api-access-9g6cx\") pod \"redhat-marketplace-rvkbf\" (UID: \"40861cf6-5e11-46ad-be02-b415c4f06dee\") " pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.832763 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40861cf6-5e11-46ad-be02-b415c4f06dee-utilities\") pod \"redhat-marketplace-rvkbf\" (UID: \"40861cf6-5e11-46ad-be02-b415c4f06dee\") " pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.832816 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40861cf6-5e11-46ad-be02-b415c4f06dee-catalog-content\") pod \"redhat-marketplace-rvkbf\" (UID: \"40861cf6-5e11-46ad-be02-b415c4f06dee\") " pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.833311 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40861cf6-5e11-46ad-be02-b415c4f06dee-catalog-content\") pod \"redhat-marketplace-rvkbf\" (UID: \"40861cf6-5e11-46ad-be02-b415c4f06dee\") " pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:08:21 crc kubenswrapper[4661]: E0120 18:08:21.833405 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-20 18:08:22.333381721 +0000 UTC m=+158.664171393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.833925 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40861cf6-5e11-46ad-be02-b415c4f06dee-utilities\") pod \"redhat-marketplace-rvkbf\" (UID: \"40861cf6-5e11-46ad-be02-b415c4f06dee\") " pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.847810 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.849256 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.849446 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgcbk"] Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.886887 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g6cx\" (UniqueName: \"kubernetes.io/projected/40861cf6-5e11-46ad-be02-b415c4f06dee-kube-api-access-9g6cx\") pod \"redhat-marketplace-rvkbf\" (UID: \"40861cf6-5e11-46ad-be02-b415c4f06dee\") " pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.912143 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.935656 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:21 crc kubenswrapper[4661]: E0120 18:08:21.936124 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:22.436107696 +0000 UTC m=+158.766897358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:21 crc kubenswrapper[4661]: I0120 18:08:21.967225 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.037898 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:22 crc kubenswrapper[4661]: E0120 18:08:22.039279 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:22.539248922 +0000 UTC m=+158.870038584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.070301 4661 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvpn8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.070361 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bvpn8" podUID="afe96487-2a45-4ad8-8f17-7f33186f55f4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.073787 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pblrm"] Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.074726 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.074746 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.074966 4661 patch_prober.go:28] interesting pod/downloads-7954f5f757-bvpn8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.075041 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bvpn8" podUID="afe96487-2a45-4ad8-8f17-7f33186f55f4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.086982 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.098276 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.098336 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.103452 4661 patch_prober.go:28] interesting pod/console-f9d7485db-phg9x container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.103625 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-phg9x" podUID="4c500541-c3f2-4f6d-8bb7-1227aa74989a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.125880 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pblrm"] Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.143149 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:22 crc kubenswrapper[4661]: E0120 18:08:22.143683 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:22.643647282 +0000 UTC m=+158.974436954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.191983 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kpzrp"] Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.254285 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.254555 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5rhd\" (UniqueName: \"kubernetes.io/projected/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-kube-api-access-k5rhd\") pod \"redhat-marketplace-pblrm\" (UID: \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\") " pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.254689 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-catalog-content\") pod \"redhat-marketplace-pblrm\" (UID: \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\") " pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.254729 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-utilities\") pod \"redhat-marketplace-pblrm\" (UID: \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\") " pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:08:22 crc kubenswrapper[4661]: E0120 18:08:22.254885 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:22.754865417 +0000 UTC m=+159.085655079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.259142 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.269377 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:22 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:22 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:22 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.270268 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.357402 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-catalog-content\") pod \"redhat-marketplace-pblrm\" (UID: \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\") " pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.357744 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-utilities\") pod \"redhat-marketplace-pblrm\" (UID: \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\") " pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.357772 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.357835 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5rhd\" (UniqueName: \"kubernetes.io/projected/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-kube-api-access-k5rhd\") pod \"redhat-marketplace-pblrm\" (UID: \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\") " pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.358737 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-catalog-content\") pod \"redhat-marketplace-pblrm\" (UID: \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\") " pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:08:22 crc kubenswrapper[4661]: E0120 18:08:22.359092 
4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:22.859062172 +0000 UTC m=+159.189851834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.359224 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-utilities\") pod \"redhat-marketplace-pblrm\" (UID: \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\") " pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.397135 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.420081 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5jf5n"] Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.446525 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5rhd\" (UniqueName: \"kubernetes.io/projected/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-kube-api-access-k5rhd\") pod \"redhat-marketplace-pblrm\" (UID: \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\") " pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.458964 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:22 crc kubenswrapper[4661]: E0120 18:08:22.460286 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:22.960261586 +0000 UTC m=+159.291051248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.562800 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:22 crc kubenswrapper[4661]: E0120 18:08:22.565412 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:23.065388366 +0000 UTC m=+159.396178028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.665201 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:22 crc kubenswrapper[4661]: E0120 18:08:22.665525 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:23.165505271 +0000 UTC m=+159.496294933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.725199 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvkbf"] Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.728925 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.729006 4661 generic.go:334] "Generic (PLEG): container finished" podID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" containerID="72f35ed4405aaeb208d8c537c9714bf5f042011bb254200b4d219d863d0e205d" exitCode=0 Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.729198 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgcbk" event={"ID":"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38","Type":"ContainerDied","Data":"72f35ed4405aaeb208d8c537c9714bf5f042011bb254200b4d219d863d0e205d"} Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.729239 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgcbk" event={"ID":"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38","Type":"ContainerStarted","Data":"86c905d9ac031ea7d0030bebbc5ca89a0915fcd1f4154dc195382df9a525b27b"} Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.732886 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.736030 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.743947 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" event={"ID":"7a1f08ef-d9a0-484e-9959-14d3ab178d28","Type":"ContainerDied","Data":"d478bea8e1e7e0bb5b56b1a2364a67ef46f694c725076ea64a878e404548786f"} Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.743996 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d478bea8e1e7e0bb5b56b1a2364a67ef46f694c725076ea64a878e404548786f" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.744074 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.745862 4661 generic.go:334] "Generic (PLEG): container finished" podID="c34658be-616a-469f-a560-61709f82cde6" containerID="b547dbca4059534c8408283f55e6dc6ff26c310b608dff887bc41d78c755fe63" exitCode=0 Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.745906 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lfrt" event={"ID":"c34658be-616a-469f-a560-61709f82cde6","Type":"ContainerDied","Data":"b547dbca4059534c8408283f55e6dc6ff26c310b608dff887bc41d78c755fe63"} Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.751060 4661 generic.go:334] "Generic (PLEG): container finished" podID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" containerID="3978e0f4d060d08a2ac1273252335d07c7f12f11b63f7bf90b9853126592c9d5" exitCode=0 Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.751108 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpzrp" event={"ID":"8f48199b-41a0-44a2-b1b4-2f623ab0f413","Type":"ContainerDied","Data":"3978e0f4d060d08a2ac1273252335d07c7f12f11b63f7bf90b9853126592c9d5"} Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.751129 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpzrp" event={"ID":"8f48199b-41a0-44a2-b1b4-2f623ab0f413","Type":"ContainerStarted","Data":"314c6206849cbdcb2879a061abb97fdc2430026f135356a81c1a9d27e19dc876"} Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.771806 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:22 crc kubenswrapper[4661]: E0120 18:08:22.772472 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:23.27245019 +0000 UTC m=+159.603239852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.776881 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" event={"ID":"413ba99d-7214-4981-98fb-910c4f5731d8","Type":"ContainerStarted","Data":"15a75f670d0ad12046e490ea7a14029be92c6f572dcdd5b8f05810f4a023226f"} Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.781685 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jf5n" event={"ID":"ac4a870b-4ca8-4046-b9b1-6001f8b13a51","Type":"ContainerStarted","Data":"3393c739a03766e293b89fad2ab822403cc320eb3766252b4fd88c54c3cf37ca"} Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.789880 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kh25t" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.845776 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cr2m2"] Jan 20 18:08:22 crc kubenswrapper[4661]: E0120 18:08:22.846028 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1f08ef-d9a0-484e-9959-14d3ab178d28" containerName="collect-profiles" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.846041 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1f08ef-d9a0-484e-9959-14d3ab178d28" containerName="collect-profiles" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.846134 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a1f08ef-d9a0-484e-9959-14d3ab178d28" containerName="collect-profiles" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.847016 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.853104 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.857093 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cr2m2"] Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.878387 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d697c\" (UniqueName: \"kubernetes.io/projected/7a1f08ef-d9a0-484e-9959-14d3ab178d28-kube-api-access-d697c\") pod \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\" (UID: \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\") " Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.878507 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a1f08ef-d9a0-484e-9959-14d3ab178d28-config-volume\") pod \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\" (UID: \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\") " Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.878681 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:22 crc kubenswrapper[4661]: E0120 18:08:22.878859 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:23.378836754 +0000 UTC m=+159.709626406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.879492 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a1f08ef-d9a0-484e-9959-14d3ab178d28-secret-volume\") pod \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\" (UID: \"7a1f08ef-d9a0-484e-9959-14d3ab178d28\") " Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.879645 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t9xx\" (UniqueName: \"kubernetes.io/projected/a087d508-430f-45ba-bff2-b58d06cebd51-kube-api-access-2t9xx\") pod \"redhat-operators-cr2m2\" (UID: \"a087d508-430f-45ba-bff2-b58d06cebd51\") " pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.879728 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a087d508-430f-45ba-bff2-b58d06cebd51-catalog-content\") pod \"redhat-operators-cr2m2\" (UID: \"a087d508-430f-45ba-bff2-b58d06cebd51\") " pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.879815 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a1f08ef-d9a0-484e-9959-14d3ab178d28-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a1f08ef-d9a0-484e-9959-14d3ab178d28" (UID: "7a1f08ef-d9a0-484e-9959-14d3ab178d28"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.879914 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:22 crc kubenswrapper[4661]: E0120 18:08:22.880264 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:23.380248632 +0000 UTC m=+159.711038294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.880240 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a087d508-430f-45ba-bff2-b58d06cebd51-utilities\") pod \"redhat-operators-cr2m2\" (UID: \"a087d508-430f-45ba-bff2-b58d06cebd51\") " pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.880330 4661 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a1f08ef-d9a0-484e-9959-14d3ab178d28-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.918094 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a1f08ef-d9a0-484e-9959-14d3ab178d28-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a1f08ef-d9a0-484e-9959-14d3ab178d28" (UID: "7a1f08ef-d9a0-484e-9959-14d3ab178d28"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.925880 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a1f08ef-d9a0-484e-9959-14d3ab178d28-kube-api-access-d697c" (OuterVolumeSpecName: "kube-api-access-d697c") pod "7a1f08ef-d9a0-484e-9959-14d3ab178d28" (UID: "7a1f08ef-d9a0-484e-9959-14d3ab178d28"). InnerVolumeSpecName "kube-api-access-d697c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.982443 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.983053 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a087d508-430f-45ba-bff2-b58d06cebd51-utilities\") pod \"redhat-operators-cr2m2\" (UID: \"a087d508-430f-45ba-bff2-b58d06cebd51\") " pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.983086 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t9xx\" (UniqueName: \"kubernetes.io/projected/a087d508-430f-45ba-bff2-b58d06cebd51-kube-api-access-2t9xx\") pod \"redhat-operators-cr2m2\" (UID: \"a087d508-430f-45ba-bff2-b58d06cebd51\") " pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.983112 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a087d508-430f-45ba-bff2-b58d06cebd51-catalog-content\") pod \"redhat-operators-cr2m2\" (UID: \"a087d508-430f-45ba-bff2-b58d06cebd51\") " pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.983180 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d697c\" (UniqueName: \"kubernetes.io/projected/7a1f08ef-d9a0-484e-9959-14d3ab178d28-kube-api-access-d697c\") on node \"crc\" DevicePath \"\"" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.983191 4661 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a1f08ef-d9a0-484e-9959-14d3ab178d28-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.983785 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a087d508-430f-45ba-bff2-b58d06cebd51-catalog-content\") pod \"redhat-operators-cr2m2\" (UID: \"a087d508-430f-45ba-bff2-b58d06cebd51\") " pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:08:22 crc kubenswrapper[4661]: E0120 18:08:22.983840 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:23.483796739 +0000 UTC m=+159.814586391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:22 crc kubenswrapper[4661]: I0120 18:08:22.985125 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a087d508-430f-45ba-bff2-b58d06cebd51-utilities\") pod \"redhat-operators-cr2m2\" (UID: \"a087d508-430f-45ba-bff2-b58d06cebd51\") " pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.002902 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dwpwk" podStartSLOduration=14.002878975 podStartE2EDuration="14.002878975s" podCreationTimestamp="2026-01-20 18:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:23.002062663 +0000 UTC m=+159.332852325" watchObservedRunningTime="2026-01-20 18:08:23.002878975 +0000 UTC m=+159.333668627" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.046166 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t9xx\" (UniqueName: \"kubernetes.io/projected/a087d508-430f-45ba-bff2-b58d06cebd51-kube-api-access-2t9xx\") pod \"redhat-operators-cr2m2\" (UID: \"a087d508-430f-45ba-bff2-b58d06cebd51\") " pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.089002 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:23 crc kubenswrapper[4661]: E0120 18:08:23.089849 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:23.589809403 +0000 UTC m=+159.920599065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.193716 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:23 crc kubenswrapper[4661]: E0120 18:08:23.194519 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:23.694479651 +0000 UTC m=+160.025269323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.194880 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:23 crc kubenswrapper[4661]: E0120 18:08:23.195329 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:23.695309344 +0000 UTC m=+160.026099006 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.256921 4661 patch_prober.go:28] interesting pod/apiserver-76f77b778f-q7s9r container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 20 18:08:23 crc kubenswrapper[4661]: [+]log ok Jan 20 18:08:23 crc kubenswrapper[4661]: [+]etcd ok Jan 20 18:08:23 crc kubenswrapper[4661]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 20 18:08:23 crc kubenswrapper[4661]: [+]poststarthook/generic-apiserver-start-informers ok Jan 20 18:08:23 crc kubenswrapper[4661]: [+]poststarthook/max-in-flight-filter ok Jan 20 18:08:23 crc kubenswrapper[4661]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 20 18:08:23 crc kubenswrapper[4661]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 20 18:08:23 crc kubenswrapper[4661]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 20 18:08:23 crc kubenswrapper[4661]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 20 18:08:23 crc kubenswrapper[4661]: [+]poststarthook/project.openshift.io-projectcache ok Jan 20 18:08:23 crc kubenswrapper[4661]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 20 18:08:23 crc kubenswrapper[4661]: [+]poststarthook/openshift.io-startinformers ok Jan 20 18:08:23 crc kubenswrapper[4661]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 20 18:08:23 crc kubenswrapper[4661]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 20 18:08:23 crc kubenswrapper[4661]: livez check failed Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.257039 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" podUID="53f0efc5-f6a1-4f6b-a37b-3e9c4e3fea65" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.270616 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zrhnk"] Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.272291 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.273135 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:23 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:23 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:23 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.281643 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.292220 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.297365 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:23 crc kubenswrapper[4661]: E0120 18:08:23.297683 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 18:08:23.797643248 +0000 UTC m=+160.128432910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.310523 4661 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.341727 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrhnk"] Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.353930 4661 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-20T18:08:23.310553007Z","Handler":null,"Name":""} Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.399453 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk5ll\" (UniqueName: \"kubernetes.io/projected/16283339-07c9-417a-9616-06e3f9eac63d-kube-api-access-vk5ll\") pod \"redhat-operators-zrhnk\" (UID: \"16283339-07c9-417a-9616-06e3f9eac63d\") " pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.399768 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16283339-07c9-417a-9616-06e3f9eac63d-catalog-content\") pod \"redhat-operators-zrhnk\" (UID: \"16283339-07c9-417a-9616-06e3f9eac63d\") " pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.399852 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16283339-07c9-417a-9616-06e3f9eac63d-utilities\") pod \"redhat-operators-zrhnk\" (UID: \"16283339-07c9-417a-9616-06e3f9eac63d\") " pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.399997 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:23 crc kubenswrapper[4661]: E0120 18:08:23.400409 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 18:08:23.900394444 +0000 UTC m=+160.231184106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7m2kh" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.466057 4661 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.466115 4661 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.507247 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.507597 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk5ll\" (UniqueName: \"kubernetes.io/projected/16283339-07c9-417a-9616-06e3f9eac63d-kube-api-access-vk5ll\") pod \"redhat-operators-zrhnk\" (UID: \"16283339-07c9-417a-9616-06e3f9eac63d\") " pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.507635 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16283339-07c9-417a-9616-06e3f9eac63d-catalog-content\") pod \"redhat-operators-zrhnk\" (UID: \"16283339-07c9-417a-9616-06e3f9eac63d\") " pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.507661 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16283339-07c9-417a-9616-06e3f9eac63d-utilities\") pod \"redhat-operators-zrhnk\" (UID: \"16283339-07c9-417a-9616-06e3f9eac63d\") " pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.508572 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16283339-07c9-417a-9616-06e3f9eac63d-utilities\") pod \"redhat-operators-zrhnk\" (UID: \"16283339-07c9-417a-9616-06e3f9eac63d\") " pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.508866 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16283339-07c9-417a-9616-06e3f9eac63d-catalog-content\") pod \"redhat-operators-zrhnk\" (UID: \"16283339-07c9-417a-9616-06e3f9eac63d\") " pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.539931 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk5ll\" (UniqueName: \"kubernetes.io/projected/16283339-07c9-417a-9616-06e3f9eac63d-kube-api-access-vk5ll\") pod 
\"redhat-operators-zrhnk\" (UID: \"16283339-07c9-417a-9616-06e3f9eac63d\") " pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.603850 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.614004 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.655600 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pblrm"] Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.674322 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-454g9" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.715298 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.815279 4661 generic.go:334] "Generic (PLEG): container finished" podID="40861cf6-5e11-46ad-be02-b415c4f06dee" containerID="bb4ecb237666be5a9f78edbc6f8062be1066f0db17312852a44589c207cf3970" exitCode=0 Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.815351 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvkbf" event={"ID":"40861cf6-5e11-46ad-be02-b415c4f06dee","Type":"ContainerDied","Data":"bb4ecb237666be5a9f78edbc6f8062be1066f0db17312852a44589c207cf3970"} Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.815382 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvkbf" event={"ID":"40861cf6-5e11-46ad-be02-b415c4f06dee","Type":"ContainerStarted","Data":"c5a0f84abd162ad6153c3a53b6a491e853d2d91e534c3544a4f1ae3cf85c58e0"} Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.838946 4661 generic.go:334] "Generic (PLEG): container finished" podID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" containerID="7a5c42800d36e5d6c21494b0c03b43ed58b44f390b48c5f0040c4ce81df19867" exitCode=0 Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.840149 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jf5n" event={"ID":"ac4a870b-4ca8-4046-b9b1-6001f8b13a51","Type":"ContainerDied","Data":"7a5c42800d36e5d6c21494b0c03b43ed58b44f390b48c5f0040c4ce81df19867"} Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.927618 4661 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.927686 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:23 crc kubenswrapper[4661]: I0120 18:08:23.942184 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cr2m2"] Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.010289 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7m2kh\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.167779 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.177652 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.229038 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrhnk"] Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.274934 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:24 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:24 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:24 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.275058 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.850734 4661 generic.go:334] "Generic (PLEG): container finished" podID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" containerID="4f0be225dab321348eea5c7d2d972c6eedec8afd15e1533559f7c75fdcbf7eeb" exitCode=0 Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.850874 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pblrm" event={"ID":"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c","Type":"ContainerDied","Data":"4f0be225dab321348eea5c7d2d972c6eedec8afd15e1533559f7c75fdcbf7eeb"} Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.850915 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pblrm" 
event={"ID":"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c","Type":"ContainerStarted","Data":"3042670c2470b291604b62ad3991ddbc65087e814f733df5ed68552f08fe6b92"} Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.863897 4661 generic.go:334] "Generic (PLEG): container finished" podID="16283339-07c9-417a-9616-06e3f9eac63d" containerID="86714e9a821150efc96b7b00bb40869cfd96c0ca6de7ea024f1e404ab353d103" exitCode=0 Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.863975 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrhnk" event={"ID":"16283339-07c9-417a-9616-06e3f9eac63d","Type":"ContainerDied","Data":"86714e9a821150efc96b7b00bb40869cfd96c0ca6de7ea024f1e404ab353d103"} Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.864007 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrhnk" event={"ID":"16283339-07c9-417a-9616-06e3f9eac63d","Type":"ContainerStarted","Data":"58f8e98d66849e84526f9f1bbda84723772038c0330853f062373fefd9dfff03"} Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.876139 4661 generic.go:334] "Generic (PLEG): container finished" podID="a087d508-430f-45ba-bff2-b58d06cebd51" containerID="947a5bd7969601103f94f4dde8dbee930156c260f00fc2e4df04935a6a402c6d" exitCode=0 Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.876662 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr2m2" event={"ID":"a087d508-430f-45ba-bff2-b58d06cebd51","Type":"ContainerDied","Data":"947a5bd7969601103f94f4dde8dbee930156c260f00fc2e4df04935a6a402c6d"} Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.876752 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr2m2" event={"ID":"a087d508-430f-45ba-bff2-b58d06cebd51","Type":"ContainerStarted","Data":"fc7b89aecf983e94a9a987d79dc56f07000002ff3b29e19fd1f61da471c75e48"} Jan 20 18:08:24 crc kubenswrapper[4661]: I0120 18:08:24.904796 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7m2kh"] Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.206993 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.208606 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.213995 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.213995 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.214249 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.263341 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:25 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:25 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:25 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.263418 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.385097 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/211d52d0-1b94-4a18-89ae-f52d5206f12b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"211d52d0-1b94-4a18-89ae-f52d5206f12b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.385177 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/211d52d0-1b94-4a18-89ae-f52d5206f12b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"211d52d0-1b94-4a18-89ae-f52d5206f12b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.486441 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/211d52d0-1b94-4a18-89ae-f52d5206f12b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"211d52d0-1b94-4a18-89ae-f52d5206f12b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.486596 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/211d52d0-1b94-4a18-89ae-f52d5206f12b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"211d52d0-1b94-4a18-89ae-f52d5206f12b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.487076 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/211d52d0-1b94-4a18-89ae-f52d5206f12b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"211d52d0-1b94-4a18-89ae-f52d5206f12b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 
18:08:25.507573 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/211d52d0-1b94-4a18-89ae-f52d5206f12b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"211d52d0-1b94-4a18-89ae-f52d5206f12b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.544802 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.920792 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" event={"ID":"1a3225a4-585b-4ad0-9951-c5feae37b6cc","Type":"ContainerStarted","Data":"89f9fec8fd3522e747064cdfdb72ed825348852b56e5dba941f2e53edbd881e1"} Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.921160 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" event={"ID":"1a3225a4-585b-4ad0-9951-c5feae37b6cc","Type":"ContainerStarted","Data":"f235571a41b0a8546bde42679a8fc71c3b6b4c2b1fc0c865be3d4f9fed59544d"} Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.921222 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:25 crc kubenswrapper[4661]: I0120 18:08:25.945717 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" podStartSLOduration=140.945675406 podStartE2EDuration="2m20.945675406s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:25.945007848 +0000 UTC m=+162.275797510" watchObservedRunningTime="2026-01-20 18:08:25.945675406 +0000 UTC m=+162.276465068" Jan 20 18:08:26 crc kubenswrapper[4661]: I0120 18:08:26.158057 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 18:08:26 crc kubenswrapper[4661]: I0120 18:08:26.264235 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:26 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:26 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:26 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:26 crc kubenswrapper[4661]: I0120 18:08:26.264562 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:27 crc kubenswrapper[4661]: I0120 18:08:27.071035 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:27 crc kubenswrapper[4661]: I0120 18:08:27.084121 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-q7s9r" Jan 20 18:08:27 crc kubenswrapper[4661]: I0120 18:08:27.200749 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"211d52d0-1b94-4a18-89ae-f52d5206f12b","Type":"ContainerStarted","Data":"d32beff8ede66439fc86a5b12ff4f017e8798af30eec61912c75ceae0ddc04bf"} Jan 20 18:08:27 crc kubenswrapper[4661]: I0120 18:08:27.265411 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:27 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:27 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:27 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:27 crc kubenswrapper[4661]: I0120 18:08:27.265500 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:27 crc kubenswrapper[4661]: I0120 18:08:27.938382 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:08:27 crc kubenswrapper[4661]: I0120 18:08:27.970645 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131-metrics-certs\") pod \"network-metrics-daemon-dhd6h\" (UID: \"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131\") " pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:08:27 crc kubenswrapper[4661]: I0120 18:08:27.991148 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dhd6h" Jan 20 18:08:28 crc kubenswrapper[4661]: I0120 18:08:28.271816 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:28 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:28 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:28 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:28 crc kubenswrapper[4661]: I0120 18:08:28.272280 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:28 crc kubenswrapper[4661]: I0120 18:08:28.353863 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"211d52d0-1b94-4a18-89ae-f52d5206f12b","Type":"ContainerStarted","Data":"605fb41791f5cff9f2c0ed6cc8776f664d8485e0b4d9c19ef4cd373f1aad5839"} Jan 20 18:08:28 crc kubenswrapper[4661]: I0120 18:08:28.727806 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.727776328 podStartE2EDuration="3.727776328s" podCreationTimestamp="2026-01-20 18:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:28.38545194 +0000 UTC m=+164.716241602" watchObservedRunningTime="2026-01-20 18:08:28.727776328 +0000 UTC m=+165.058565990" Jan 20 18:08:28 crc kubenswrapper[4661]: I0120 18:08:28.729747 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dhd6h"] Jan 20 18:08:28 crc kubenswrapper[4661]: I0120 18:08:28.770478 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gbprg" Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.267395 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:29 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:29 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:29 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.267582 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.324127 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.324200 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" 
podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.390464 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" event={"ID":"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131","Type":"ContainerStarted","Data":"27db502685274860bfac87a0d65de8311b1362541138bbe3bdf36bcec9f45dee"} Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.391960 4661 generic.go:334] "Generic (PLEG): container finished" podID="211d52d0-1b94-4a18-89ae-f52d5206f12b" containerID="605fb41791f5cff9f2c0ed6cc8776f664d8485e0b4d9c19ef4cd373f1aad5839" exitCode=0 Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.391996 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"211d52d0-1b94-4a18-89ae-f52d5206f12b","Type":"ContainerDied","Data":"605fb41791f5cff9f2c0ed6cc8776f664d8485e0b4d9c19ef4cd373f1aad5839"} Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.772738 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.773567 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.782803 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.788645 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.788857 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.907138 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/110d8923-49b6-4148-b18e-11ba66c71af7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"110d8923-49b6-4148-b18e-11ba66c71af7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 18:08:29 crc kubenswrapper[4661]: I0120 18:08:29.907197 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/110d8923-49b6-4148-b18e-11ba66c71af7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"110d8923-49b6-4148-b18e-11ba66c71af7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.017233 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/110d8923-49b6-4148-b18e-11ba66c71af7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"110d8923-49b6-4148-b18e-11ba66c71af7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.017315 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/110d8923-49b6-4148-b18e-11ba66c71af7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"110d8923-49b6-4148-b18e-11ba66c71af7\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.017790 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/110d8923-49b6-4148-b18e-11ba66c71af7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"110d8923-49b6-4148-b18e-11ba66c71af7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.079059 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/110d8923-49b6-4148-b18e-11ba66c71af7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"110d8923-49b6-4148-b18e-11ba66c71af7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.118313 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.266376 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:30 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:30 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:30 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.266906 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.541288 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.803417 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.935087 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/211d52d0-1b94-4a18-89ae-f52d5206f12b-kubelet-dir\") pod \"211d52d0-1b94-4a18-89ae-f52d5206f12b\" (UID: \"211d52d0-1b94-4a18-89ae-f52d5206f12b\") " Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.935235 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/211d52d0-1b94-4a18-89ae-f52d5206f12b-kube-api-access\") pod \"211d52d0-1b94-4a18-89ae-f52d5206f12b\" (UID: \"211d52d0-1b94-4a18-89ae-f52d5206f12b\") " Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.935272 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/211d52d0-1b94-4a18-89ae-f52d5206f12b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "211d52d0-1b94-4a18-89ae-f52d5206f12b" (UID: "211d52d0-1b94-4a18-89ae-f52d5206f12b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.935478 4661 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/211d52d0-1b94-4a18-89ae-f52d5206f12b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 18:08:30 crc kubenswrapper[4661]: I0120 18:08:30.943267 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211d52d0-1b94-4a18-89ae-f52d5206f12b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "211d52d0-1b94-4a18-89ae-f52d5206f12b" (UID: "211d52d0-1b94-4a18-89ae-f52d5206f12b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:08:31 crc kubenswrapper[4661]: I0120 18:08:31.039488 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/211d52d0-1b94-4a18-89ae-f52d5206f12b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 18:08:31 crc kubenswrapper[4661]: I0120 18:08:31.263697 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:31 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:31 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:31 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:31 crc kubenswrapper[4661]: I0120 18:08:31.264090 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:31 crc kubenswrapper[4661]: I0120 18:08:31.494402 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" event={"ID":"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131","Type":"ContainerStarted","Data":"8948a0a8d5edd21c825a52e53c3849beaa8c6b6318255f137b8a3320e4d73c3d"} Jan 20 18:08:31 crc kubenswrapper[4661]: I0120 18:08:31.528312 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"211d52d0-1b94-4a18-89ae-f52d5206f12b","Type":"ContainerDied","Data":"d32beff8ede66439fc86a5b12ff4f017e8798af30eec61912c75ceae0ddc04bf"} Jan 20 18:08:31 crc kubenswrapper[4661]: I0120 18:08:31.528369 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d32beff8ede66439fc86a5b12ff4f017e8798af30eec61912c75ceae0ddc04bf" Jan 20 18:08:31 crc kubenswrapper[4661]: I0120 18:08:31.528450 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 18:08:31 crc kubenswrapper[4661]: I0120 18:08:31.549891 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"110d8923-49b6-4148-b18e-11ba66c71af7","Type":"ContainerStarted","Data":"2865fc64b0097a99f58916b025489e83e51422720791ea84a31de29738547d73"} Jan 20 18:08:32 crc kubenswrapper[4661]: I0120 18:08:32.079341 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bvpn8" Jan 20 18:08:32 crc kubenswrapper[4661]: I0120 18:08:32.100263 4661 patch_prober.go:28] interesting pod/console-f9d7485db-phg9x container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 20 18:08:32 crc kubenswrapper[4661]: I0120 18:08:32.100366 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-phg9x" podUID="4c500541-c3f2-4f6d-8bb7-1227aa74989a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 20 18:08:32 crc kubenswrapper[4661]: I0120 18:08:32.279777 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:32 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:32 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:32 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:32 crc kubenswrapper[4661]: I0120 18:08:32.280092 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:32 crc kubenswrapper[4661]: I0120 18:08:32.625932 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"110d8923-49b6-4148-b18e-11ba66c71af7","Type":"ContainerStarted","Data":"4eec2c64cd98374061255871491274b097d8f1f62441d8ea6f6cd8bef64ef05f"} Jan 20 18:08:32 crc kubenswrapper[4661]: I0120 18:08:32.672864 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.672839738 podStartE2EDuration="3.672839738s" podCreationTimestamp="2026-01-20 18:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:32.669501588 +0000 UTC m=+169.000291250" watchObservedRunningTime="2026-01-20 18:08:32.672839738 +0000 UTC m=+169.003629400" Jan 20 18:08:32 crc kubenswrapper[4661]: I0120 18:08:32.682501 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dhd6h" event={"ID":"58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131","Type":"ContainerStarted","Data":"c02c84f999d88c99736799e9d3e0540b2476f11f429c7b94cf622c107236f362"} Jan 20 18:08:32 crc kubenswrapper[4661]: I0120 18:08:32.707749 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-dhd6h" 
podStartSLOduration=147.70772111 podStartE2EDuration="2m27.70772111s" podCreationTimestamp="2026-01-20 18:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:08:32.707387361 +0000 UTC m=+169.038177023" watchObservedRunningTime="2026-01-20 18:08:32.70772111 +0000 UTC m=+169.038510772" Jan 20 18:08:33 crc kubenswrapper[4661]: I0120 18:08:33.262949 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:33 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:33 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:33 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:33 crc kubenswrapper[4661]: I0120 18:08:33.263069 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:33 crc kubenswrapper[4661]: I0120 18:08:33.723348 4661 generic.go:334] "Generic (PLEG): container finished" podID="110d8923-49b6-4148-b18e-11ba66c71af7" containerID="4eec2c64cd98374061255871491274b097d8f1f62441d8ea6f6cd8bef64ef05f" exitCode=0 Jan 20 18:08:33 crc kubenswrapper[4661]: I0120 18:08:33.723460 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"110d8923-49b6-4148-b18e-11ba66c71af7","Type":"ContainerDied","Data":"4eec2c64cd98374061255871491274b097d8f1f62441d8ea6f6cd8bef64ef05f"} Jan 20 18:08:34 crc kubenswrapper[4661]: I0120 18:08:34.273042 4661 patch_prober.go:28] interesting pod/router-default-5444994796-tptl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 18:08:34 crc kubenswrapper[4661]: [-]has-synced failed: reason withheld Jan 20 18:08:34 crc kubenswrapper[4661]: [+]process-running ok Jan 20 18:08:34 crc kubenswrapper[4661]: healthz check failed Jan 20 18:08:34 crc kubenswrapper[4661]: I0120 18:08:34.273122 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tptl9" podUID="c7ff0869-4b3b-447f-a012-9bc155bae99b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 18:08:35 crc kubenswrapper[4661]: I0120 18:08:35.262967 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:35 crc kubenswrapper[4661]: I0120 18:08:35.268093 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-tptl9" Jan 20 18:08:41 crc kubenswrapper[4661]: I0120 18:08:41.206199 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 18:08:42 crc kubenswrapper[4661]: I0120 18:08:42.102009 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:42 crc kubenswrapper[4661]: I0120 18:08:42.105912 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:08:44 crc kubenswrapper[4661]: I0120 18:08:44.184461 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:08:51 crc kubenswrapper[4661]: I0120 18:08:51.923212 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tmtgr" Jan 20 18:08:56 crc kubenswrapper[4661]: I0120 18:08:56.240137 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 18:08:56 crc kubenswrapper[4661]: I0120 18:08:56.374650 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/110d8923-49b6-4148-b18e-11ba66c71af7-kube-api-access\") pod \"110d8923-49b6-4148-b18e-11ba66c71af7\" (UID: \"110d8923-49b6-4148-b18e-11ba66c71af7\") " Jan 20 18:08:56 crc kubenswrapper[4661]: I0120 18:08:56.374780 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/110d8923-49b6-4148-b18e-11ba66c71af7-kubelet-dir\") pod \"110d8923-49b6-4148-b18e-11ba66c71af7\" (UID: \"110d8923-49b6-4148-b18e-11ba66c71af7\") " Jan 20 18:08:56 crc kubenswrapper[4661]: I0120 18:08:56.375324 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/110d8923-49b6-4148-b18e-11ba66c71af7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "110d8923-49b6-4148-b18e-11ba66c71af7" (UID: "110d8923-49b6-4148-b18e-11ba66c71af7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:08:56 crc kubenswrapper[4661]: I0120 18:08:56.408442 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110d8923-49b6-4148-b18e-11ba66c71af7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "110d8923-49b6-4148-b18e-11ba66c71af7" (UID: "110d8923-49b6-4148-b18e-11ba66c71af7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:08:56 crc kubenswrapper[4661]: I0120 18:08:56.476158 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/110d8923-49b6-4148-b18e-11ba66c71af7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 18:08:56 crc kubenswrapper[4661]: I0120 18:08:56.476192 4661 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/110d8923-49b6-4148-b18e-11ba66c71af7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 18:08:56 crc kubenswrapper[4661]: I0120 18:08:56.959010 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"110d8923-49b6-4148-b18e-11ba66c71af7","Type":"ContainerDied","Data":"2865fc64b0097a99f58916b025489e83e51422720791ea84a31de29738547d73"} Jan 20 18:08:56 crc kubenswrapper[4661]: I0120 18:08:56.968857 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2865fc64b0097a99f58916b025489e83e51422720791ea84a31de29738547d73" Jan 20 18:08:56 crc kubenswrapper[4661]: I0120 18:08:56.959647 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 18:08:59 crc kubenswrapper[4661]: I0120 18:08:59.324199 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:08:59 crc kubenswrapper[4661]: I0120 18:08:59.324575 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.161113 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 18:09:03 crc kubenswrapper[4661]: E0120 18:09:03.162095 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211d52d0-1b94-4a18-89ae-f52d5206f12b" containerName="pruner" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.162107 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="211d52d0-1b94-4a18-89ae-f52d5206f12b" containerName="pruner" Jan 20 18:09:03 crc kubenswrapper[4661]: E0120 18:09:03.162132 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="110d8923-49b6-4148-b18e-11ba66c71af7" containerName="pruner" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.162138 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="110d8923-49b6-4148-b18e-11ba66c71af7" containerName="pruner" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.162233 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="211d52d0-1b94-4a18-89ae-f52d5206f12b" containerName="pruner" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.162243 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="110d8923-49b6-4148-b18e-11ba66c71af7" containerName="pruner" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.162595 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.165522 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.165931 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.172256 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 18:09:03 crc kubenswrapper[4661]: E0120 18:09:03.180101 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 20 18:09:03 crc kubenswrapper[4661]: E0120 18:09:03.180289 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sh765,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vgcbk_openshift-marketplace(b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 18:09:03 crc kubenswrapper[4661]: E0120 18:09:03.186161 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vgcbk" podUID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.279520 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52eae3f3-7a47-40de-9c5a-739d65fd397c-kube-api-access\") pod \"revision-pruner-9-crc\" 
(UID: \"52eae3f3-7a47-40de-9c5a-739d65fd397c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.279807 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52eae3f3-7a47-40de-9c5a-739d65fd397c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"52eae3f3-7a47-40de-9c5a-739d65fd397c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.380785 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52eae3f3-7a47-40de-9c5a-739d65fd397c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"52eae3f3-7a47-40de-9c5a-739d65fd397c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.380858 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52eae3f3-7a47-40de-9c5a-739d65fd397c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"52eae3f3-7a47-40de-9c5a-739d65fd397c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.381016 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52eae3f3-7a47-40de-9c5a-739d65fd397c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"52eae3f3-7a47-40de-9c5a-739d65fd397c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.404161 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52eae3f3-7a47-40de-9c5a-739d65fd397c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"52eae3f3-7a47-40de-9c5a-739d65fd397c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 18:09:03 crc kubenswrapper[4661]: I0120 18:09:03.505275 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 18:09:04 crc kubenswrapper[4661]: E0120 18:09:04.900373 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vgcbk" podUID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" Jan 20 18:09:04 crc kubenswrapper[4661]: E0120 18:09:04.978205 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 20 18:09:04 crc kubenswrapper[4661]: E0120 18:09:04.978466 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ks96p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-kpzrp_openshift-marketplace(8f48199b-41a0-44a2-b1b4-2f623ab0f413): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 18:09:04 crc kubenswrapper[4661]: E0120 18:09:04.980428 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-kpzrp" podUID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" Jan 20 18:09:04 crc kubenswrapper[4661]: E0120 18:09:04.996079 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 20 18:09:04 crc kubenswrapper[4661]: E0120 18:09:04.996221 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpsks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-7lfrt_openshift-marketplace(c34658be-616a-469f-a560-61709f82cde6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 18:09:04 crc kubenswrapper[4661]: E0120 18:09:04.997397 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-7lfrt" podUID="c34658be-616a-469f-a560-61709f82cde6" Jan 20 18:09:07 crc kubenswrapper[4661]: I0120 18:09:07.567506 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 18:09:07 crc kubenswrapper[4661]: I0120 18:09:07.569368 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 18:09:07 crc kubenswrapper[4661]: I0120 18:09:07.580103 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 18:09:07 crc kubenswrapper[4661]: I0120 18:09:07.660539 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2ed603c-7744-4ed0-b168-f022e5b8e145-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e2ed603c-7744-4ed0-b168-f022e5b8e145\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 18:09:07 crc kubenswrapper[4661]: I0120 18:09:07.660601 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2ed603c-7744-4ed0-b168-f022e5b8e145-kube-api-access\") pod \"installer-9-crc\" (UID: \"e2ed603c-7744-4ed0-b168-f022e5b8e145\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 18:09:07 crc kubenswrapper[4661]: I0120 18:09:07.660713 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2ed603c-7744-4ed0-b168-f022e5b8e145-var-lock\") pod \"installer-9-crc\" (UID: \"e2ed603c-7744-4ed0-b168-f022e5b8e145\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 18:09:07 crc kubenswrapper[4661]: I0120 18:09:07.765119 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2ed603c-7744-4ed0-b168-f022e5b8e145-var-lock\") pod \"installer-9-crc\" (UID: \"e2ed603c-7744-4ed0-b168-f022e5b8e145\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 18:09:07 crc kubenswrapper[4661]: I0120 18:09:07.765189 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2ed603c-7744-4ed0-b168-f022e5b8e145-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e2ed603c-7744-4ed0-b168-f022e5b8e145\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 18:09:07 crc kubenswrapper[4661]: I0120 18:09:07.765205 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2ed603c-7744-4ed0-b168-f022e5b8e145-kube-api-access\") pod \"installer-9-crc\" (UID: \"e2ed603c-7744-4ed0-b168-f022e5b8e145\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 18:09:07 crc kubenswrapper[4661]: I0120 18:09:07.765596 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2ed603c-7744-4ed0-b168-f022e5b8e145-var-lock\") pod \"installer-9-crc\" (UID: \"e2ed603c-7744-4ed0-b168-f022e5b8e145\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 18:09:07 crc kubenswrapper[4661]: I0120 18:09:07.765627 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2ed603c-7744-4ed0-b168-f022e5b8e145-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e2ed603c-7744-4ed0-b168-f022e5b8e145\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 18:09:07 crc kubenswrapper[4661]: I0120 18:09:07.785564 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2ed603c-7744-4ed0-b168-f022e5b8e145-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"e2ed603c-7744-4ed0-b168-f022e5b8e145\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 18:09:07 crc kubenswrapper[4661]: I0120 18:09:07.892700 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 18:09:09 crc kubenswrapper[4661]: E0120 18:09:09.533885 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-kpzrp" podUID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" Jan 20 18:09:09 crc kubenswrapper[4661]: E0120 18:09:09.533970 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-7lfrt" podUID="c34658be-616a-469f-a560-61709f82cde6" Jan 20 18:09:09 crc kubenswrapper[4661]: E0120 18:09:09.629985 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 20 18:09:09 crc kubenswrapper[4661]: E0120 18:09:09.630149 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t9xx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cr2m2_openshift-marketplace(a087d508-430f-45ba-bff2-b58d06cebd51): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 18:09:09 crc kubenswrapper[4661]: E0120 18:09:09.632249 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled\"" pod="openshift-marketplace/redhat-operators-cr2m2" podUID="a087d508-430f-45ba-bff2-b58d06cebd51" Jan 20 18:09:09 crc kubenswrapper[4661]: E0120 18:09:09.645822 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 20 18:09:09 crc kubenswrapper[4661]: E0120 18:09:09.645980 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vk5ll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zrhnk_openshift-marketplace(16283339-07c9-417a-9616-06e3f9eac63d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 18:09:09 crc kubenswrapper[4661]: E0120 18:09:09.647250 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zrhnk" podUID="16283339-07c9-417a-9616-06e3f9eac63d" Jan 20 18:09:09 crc kubenswrapper[4661]: E0120 18:09:09.884222 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 20 18:09:09 crc kubenswrapper[4661]: E0120 18:09:09.884406 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wchwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5jf5n_openshift-marketplace(ac4a870b-4ca8-4046-b9b1-6001f8b13a51): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 18:09:09 crc kubenswrapper[4661]: E0120 18:09:09.885620 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5jf5n" podUID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" Jan 20 18:09:13 crc kubenswrapper[4661]: E0120 18:09:13.346935 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zrhnk" podUID="16283339-07c9-417a-9616-06e3f9eac63d" Jan 20 18:09:13 crc kubenswrapper[4661]: E0120 18:09:13.347273 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5jf5n" podUID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" Jan 20 18:09:13 crc kubenswrapper[4661]: E0120 18:09:13.347323 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cr2m2" podUID="a087d508-430f-45ba-bff2-b58d06cebd51" Jan 20 18:09:16 crc kubenswrapper[4661]: E0120 18:09:16.738199 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 20 18:09:16 crc kubenswrapper[4661]: E0120 18:09:16.738578 4661 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9g6cx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rvkbf_openshift-marketplace(40861cf6-5e11-46ad-be02-b415c4f06dee): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 18:09:16 crc kubenswrapper[4661]: E0120 18:09:16.740594 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rvkbf" podUID="40861cf6-5e11-46ad-be02-b415c4f06dee" Jan 20 18:09:16 crc kubenswrapper[4661]: E0120 18:09:16.796478 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 20 18:09:16 crc kubenswrapper[4661]: E0120 18:09:16.796956 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5rhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-pblrm_openshift-marketplace(038c2d26-4bbf-46fd-90fa-d6e5a6929c7c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 18:09:16 crc kubenswrapper[4661]: E0120 18:09:16.798805 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-pblrm" podUID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" Jan 20 18:09:17 crc kubenswrapper[4661]: E0120 18:09:17.073487 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rvkbf" podUID="40861cf6-5e11-46ad-be02-b415c4f06dee" Jan 20 18:09:17 crc kubenswrapper[4661]: E0120 18:09:17.073570 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-pblrm" podUID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" Jan 20 18:09:17 crc kubenswrapper[4661]: I0120 18:09:17.097833 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 18:09:17 crc kubenswrapper[4661]: W0120 18:09:17.104843 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode2ed603c_7744_4ed0_b168_f022e5b8e145.slice/crio-9bc92273f2c497b883a6df21ef33201fb28b7771a0869015ad2267eda5eccf47 WatchSource:0}: Error finding container 9bc92273f2c497b883a6df21ef33201fb28b7771a0869015ad2267eda5eccf47: Status 404 returned error can't find the container with id 9bc92273f2c497b883a6df21ef33201fb28b7771a0869015ad2267eda5eccf47 Jan 20 18:09:17 crc kubenswrapper[4661]: I0120 18:09:17.135338 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 18:09:17 crc kubenswrapper[4661]: W0120 18:09:17.151131 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod52eae3f3_7a47_40de_9c5a_739d65fd397c.slice/crio-037f1ba2f52783b1f8db9346122dd4e835adef959b6fe13ad4a9eefa29065226 WatchSource:0}: Error finding container 037f1ba2f52783b1f8db9346122dd4e835adef959b6fe13ad4a9eefa29065226: Status 404 returned error can't find the container with id 037f1ba2f52783b1f8db9346122dd4e835adef959b6fe13ad4a9eefa29065226 Jan 20 18:09:18 crc kubenswrapper[4661]: I0120 18:09:18.076979 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e2ed603c-7744-4ed0-b168-f022e5b8e145","Type":"ContainerStarted","Data":"a914a75bc2c1e1c5f87a4847b18f8a9588cc49af3eada2587fa0a69d40f17be2"} Jan 20 18:09:18 crc kubenswrapper[4661]: I0120 18:09:18.077582 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e2ed603c-7744-4ed0-b168-f022e5b8e145","Type":"ContainerStarted","Data":"9bc92273f2c497b883a6df21ef33201fb28b7771a0869015ad2267eda5eccf47"} Jan 20 18:09:18 crc kubenswrapper[4661]: I0120 18:09:18.080148 4661 generic.go:334] "Generic (PLEG): container finished" podID="52eae3f3-7a47-40de-9c5a-739d65fd397c" containerID="520d1329607ff8db94dc419aafb478bc210572b6733e850d467a35c829caf1c2" exitCode=0 Jan 20 18:09:18 crc kubenswrapper[4661]: I0120 18:09:18.080247 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"52eae3f3-7a47-40de-9c5a-739d65fd397c","Type":"ContainerDied","Data":"520d1329607ff8db94dc419aafb478bc210572b6733e850d467a35c829caf1c2"} Jan 20 18:09:18 crc kubenswrapper[4661]: I0120 18:09:18.080278 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"52eae3f3-7a47-40de-9c5a-739d65fd397c","Type":"ContainerStarted","Data":"037f1ba2f52783b1f8db9346122dd4e835adef959b6fe13ad4a9eefa29065226"} Jan 20 18:09:18 crc kubenswrapper[4661]: I0120 18:09:18.082131 4661 generic.go:334] "Generic (PLEG): container finished" podID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" containerID="e06991b2c368f2693448e08b4bca9b968c55e1eb0debfc8f0cff76f32f116e0c" exitCode=0 Jan 20 18:09:18 crc kubenswrapper[4661]: I0120 18:09:18.082157 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgcbk" event={"ID":"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38","Type":"ContainerDied","Data":"e06991b2c368f2693448e08b4bca9b968c55e1eb0debfc8f0cff76f32f116e0c"} Jan 20 18:09:18 crc kubenswrapper[4661]: I0120 18:09:18.104822 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.104804236 podStartE2EDuration="11.104804236s" podCreationTimestamp="2026-01-20 18:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:09:18.103930383 +0000 UTC m=+214.434720055" watchObservedRunningTime="2026-01-20 18:09:18.104804236 +0000 UTC m=+214.435593898" Jan 20 18:09:19 crc kubenswrapper[4661]: I0120 18:09:19.089884 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgcbk" 
event={"ID":"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38","Type":"ContainerStarted","Data":"8548d6534cf2791c70d6cb0b7090e2a1a6f8634f2d1279c17f3747d3ca252c80"} Jan 20 18:09:19 crc kubenswrapper[4661]: I0120 18:09:19.350032 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 18:09:19 crc kubenswrapper[4661]: I0120 18:09:19.366603 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vgcbk" podStartSLOduration=3.557304619 podStartE2EDuration="59.366584595s" podCreationTimestamp="2026-01-20 18:08:20 +0000 UTC" firstStartedPulling="2026-01-20 18:08:22.732556112 +0000 UTC m=+159.063345774" lastFinishedPulling="2026-01-20 18:09:18.541836078 +0000 UTC m=+214.872625750" observedRunningTime="2026-01-20 18:09:19.113346662 +0000 UTC m=+215.444136334" watchObservedRunningTime="2026-01-20 18:09:19.366584595 +0000 UTC m=+215.697374257" Jan 20 18:09:19 crc kubenswrapper[4661]: I0120 18:09:19.513481 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52eae3f3-7a47-40de-9c5a-739d65fd397c-kubelet-dir\") pod \"52eae3f3-7a47-40de-9c5a-739d65fd397c\" (UID: \"52eae3f3-7a47-40de-9c5a-739d65fd397c\") " Jan 20 18:09:19 crc kubenswrapper[4661]: I0120 18:09:19.513568 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52eae3f3-7a47-40de-9c5a-739d65fd397c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "52eae3f3-7a47-40de-9c5a-739d65fd397c" (UID: "52eae3f3-7a47-40de-9c5a-739d65fd397c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:09:19 crc kubenswrapper[4661]: I0120 18:09:19.513717 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52eae3f3-7a47-40de-9c5a-739d65fd397c-kube-api-access\") pod \"52eae3f3-7a47-40de-9c5a-739d65fd397c\" (UID: \"52eae3f3-7a47-40de-9c5a-739d65fd397c\") " Jan 20 18:09:19 crc kubenswrapper[4661]: I0120 18:09:19.513992 4661 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52eae3f3-7a47-40de-9c5a-739d65fd397c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:19 crc kubenswrapper[4661]: I0120 18:09:19.520095 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52eae3f3-7a47-40de-9c5a-739d65fd397c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "52eae3f3-7a47-40de-9c5a-739d65fd397c" (UID: "52eae3f3-7a47-40de-9c5a-739d65fd397c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:09:19 crc kubenswrapper[4661]: I0120 18:09:19.615375 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52eae3f3-7a47-40de-9c5a-739d65fd397c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:20 crc kubenswrapper[4661]: I0120 18:09:20.096431 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"52eae3f3-7a47-40de-9c5a-739d65fd397c","Type":"ContainerDied","Data":"037f1ba2f52783b1f8db9346122dd4e835adef959b6fe13ad4a9eefa29065226"} Jan 20 18:09:20 crc kubenswrapper[4661]: I0120 18:09:20.096472 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="037f1ba2f52783b1f8db9346122dd4e835adef959b6fe13ad4a9eefa29065226" Jan 20 18:09:20 crc kubenswrapper[4661]: I0120 18:09:20.096540 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 18:09:20 crc kubenswrapper[4661]: I0120 18:09:20.705162 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:09:20 crc kubenswrapper[4661]: I0120 18:09:20.705594 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:09:20 crc kubenswrapper[4661]: I0120 18:09:20.773792 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:09:26 crc kubenswrapper[4661]: I0120 18:09:26.138793 4661 generic.go:334] "Generic (PLEG): container finished" podID="c34658be-616a-469f-a560-61709f82cde6" containerID="0b11b2cf1afdb0b38de19fd37e8bce3a11679416f3be4f41a7680340086385da" exitCode=0 Jan 20 18:09:26 crc kubenswrapper[4661]: I0120 18:09:26.138909 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lfrt" event={"ID":"c34658be-616a-469f-a560-61709f82cde6","Type":"ContainerDied","Data":"0b11b2cf1afdb0b38de19fd37e8bce3a11679416f3be4f41a7680340086385da"} Jan 20 18:09:26 crc kubenswrapper[4661]: I0120 18:09:26.159305 4661 generic.go:334] "Generic (PLEG): container finished" podID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" containerID="9b8bf4bcd0d341c298fb4e77bedf58dc83c7b292802aea620a01f5e6412e9e54" exitCode=0 Jan 20 18:09:26 crc kubenswrapper[4661]: I0120 18:09:26.184267 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpzrp" event={"ID":"8f48199b-41a0-44a2-b1b4-2f623ab0f413","Type":"ContainerDied","Data":"9b8bf4bcd0d341c298fb4e77bedf58dc83c7b292802aea620a01f5e6412e9e54"} Jan 20 18:09:27 crc kubenswrapper[4661]: I0120 18:09:27.166005 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lfrt" event={"ID":"c34658be-616a-469f-a560-61709f82cde6","Type":"ContainerStarted","Data":"4cd41c4745ea7ae489870a1a22a7169f3afbab9860a41d8ce1244f1853a7bd8d"} Jan 20 18:09:27 crc kubenswrapper[4661]: I0120 18:09:27.169112 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpzrp" event={"ID":"8f48199b-41a0-44a2-b1b4-2f623ab0f413","Type":"ContainerStarted","Data":"ca21fcff88b79903679035dce1b6d15cda3bc94194c329bbe06f013f61f70320"} Jan 20 18:09:27 crc kubenswrapper[4661]: I0120 18:09:27.171267 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-cr2m2" event={"ID":"a087d508-430f-45ba-bff2-b58d06cebd51","Type":"ContainerStarted","Data":"b6129bd7a5902f3f43d546782dd51acfd7c00abc59dfc57f56760f6f5c2a1663"} Jan 20 18:09:27 crc kubenswrapper[4661]: I0120 18:09:27.183816 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7lfrt" podStartSLOduration=4.398804174 podStartE2EDuration="1m8.183797465s" podCreationTimestamp="2026-01-20 18:08:19 +0000 UTC" firstStartedPulling="2026-01-20 18:08:22.771031022 +0000 UTC m=+159.101820684" lastFinishedPulling="2026-01-20 18:09:26.556024273 +0000 UTC m=+222.886813975" observedRunningTime="2026-01-20 18:09:27.181774161 +0000 UTC m=+223.512563833" watchObservedRunningTime="2026-01-20 18:09:27.183797465 +0000 UTC m=+223.514587117" Jan 20 18:09:27 crc kubenswrapper[4661]: I0120 18:09:27.198883 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kpzrp" podStartSLOduration=3.358663539 podStartE2EDuration="1m7.198869781s" podCreationTimestamp="2026-01-20 18:08:20 +0000 UTC" firstStartedPulling="2026-01-20 18:08:22.769900431 +0000 UTC m=+159.100690093" lastFinishedPulling="2026-01-20 18:09:26.610106643 +0000 UTC m=+222.940896335" observedRunningTime="2026-01-20 18:09:27.196625562 +0000 UTC m=+223.527415224" watchObservedRunningTime="2026-01-20 18:09:27.198869781 +0000 UTC m=+223.529659443" Jan 20 18:09:28 crc kubenswrapper[4661]: I0120 18:09:28.177286 4661 generic.go:334] "Generic (PLEG): container finished" podID="a087d508-430f-45ba-bff2-b58d06cebd51" containerID="b6129bd7a5902f3f43d546782dd51acfd7c00abc59dfc57f56760f6f5c2a1663" exitCode=0 Jan 20 18:09:28 crc kubenswrapper[4661]: I0120 18:09:28.177358 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr2m2" event={"ID":"a087d508-430f-45ba-bff2-b58d06cebd51","Type":"ContainerDied","Data":"b6129bd7a5902f3f43d546782dd51acfd7c00abc59dfc57f56760f6f5c2a1663"} Jan 20 18:09:28 crc kubenswrapper[4661]: I0120 18:09:28.179512 4661 generic.go:334] "Generic (PLEG): container finished" podID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" containerID="407855e057965dd9437ac8d98b2130c749094b681a7a4707f85912584fde6750" exitCode=0 Jan 20 18:09:28 crc kubenswrapper[4661]: I0120 18:09:28.179541 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jf5n" event={"ID":"ac4a870b-4ca8-4046-b9b1-6001f8b13a51","Type":"ContainerDied","Data":"407855e057965dd9437ac8d98b2130c749094b681a7a4707f85912584fde6750"} Jan 20 18:09:29 crc kubenswrapper[4661]: I0120 18:09:29.187495 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr2m2" event={"ID":"a087d508-430f-45ba-bff2-b58d06cebd51","Type":"ContainerStarted","Data":"46382e293347af2c3406396bb7946e118169b344875f2a17c16febcf9dd77459"} Jan 20 18:09:29 crc kubenswrapper[4661]: I0120 18:09:29.190607 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jf5n" event={"ID":"ac4a870b-4ca8-4046-b9b1-6001f8b13a51","Type":"ContainerStarted","Data":"4ce98ee0b20cad31342ee8168f1f3238f10a2059a3f32502cebb6762ceffac4f"} Jan 20 18:09:29 crc kubenswrapper[4661]: I0120 18:09:29.191887 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvkbf" 
event={"ID":"40861cf6-5e11-46ad-be02-b415c4f06dee","Type":"ContainerStarted","Data":"cd3e7e6ec69cbad2d4f4e9a3db89938b0258db3414dfaa789a8baf24e8f85f76"} Jan 20 18:09:29 crc kubenswrapper[4661]: I0120 18:09:29.193138 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrhnk" event={"ID":"16283339-07c9-417a-9616-06e3f9eac63d","Type":"ContainerStarted","Data":"e0aa71530752d11d624fa198e81a6fd009a6604475acc0b0cce8ac8c021a27b3"} Jan 20 18:09:29 crc kubenswrapper[4661]: I0120 18:09:29.211714 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cr2m2" podStartSLOduration=3.572899168 podStartE2EDuration="1m7.211694631s" podCreationTimestamp="2026-01-20 18:08:22 +0000 UTC" firstStartedPulling="2026-01-20 18:08:24.916725219 +0000 UTC m=+161.247514881" lastFinishedPulling="2026-01-20 18:09:28.555520672 +0000 UTC m=+224.886310344" observedRunningTime="2026-01-20 18:09:29.20899935 +0000 UTC m=+225.539789022" watchObservedRunningTime="2026-01-20 18:09:29.211694631 +0000 UTC m=+225.542484303" Jan 20 18:09:29 crc kubenswrapper[4661]: I0120 18:09:29.251090 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5jf5n" podStartSLOduration=5.527127874 podStartE2EDuration="1m10.251073076s" podCreationTimestamp="2026-01-20 18:08:19 +0000 UTC" firstStartedPulling="2026-01-20 18:08:23.854544314 +0000 UTC m=+160.185333976" lastFinishedPulling="2026-01-20 18:09:28.578489506 +0000 UTC m=+224.909279178" observedRunningTime="2026-01-20 18:09:29.24626876 +0000 UTC m=+225.577058422" watchObservedRunningTime="2026-01-20 18:09:29.251073076 +0000 UTC m=+225.581862738" Jan 20 18:09:29 crc kubenswrapper[4661]: I0120 18:09:29.323620 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:09:29 crc kubenswrapper[4661]: I0120 18:09:29.323695 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:09:29 crc kubenswrapper[4661]: I0120 18:09:29.323738 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:09:29 crc kubenswrapper[4661]: I0120 18:09:29.324247 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 18:09:29 crc kubenswrapper[4661]: I0120 18:09:29.324346 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e" gracePeriod=600 Jan 20 18:09:30 crc 
kubenswrapper[4661]: I0120 18:09:30.204124 4661 generic.go:334] "Generic (PLEG): container finished" podID="40861cf6-5e11-46ad-be02-b415c4f06dee" containerID="cd3e7e6ec69cbad2d4f4e9a3db89938b0258db3414dfaa789a8baf24e8f85f76" exitCode=0 Jan 20 18:09:30 crc kubenswrapper[4661]: I0120 18:09:30.204223 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvkbf" event={"ID":"40861cf6-5e11-46ad-be02-b415c4f06dee","Type":"ContainerDied","Data":"cd3e7e6ec69cbad2d4f4e9a3db89938b0258db3414dfaa789a8baf24e8f85f76"} Jan 20 18:09:30 crc kubenswrapper[4661]: I0120 18:09:30.206457 4661 generic.go:334] "Generic (PLEG): container finished" podID="16283339-07c9-417a-9616-06e3f9eac63d" containerID="e0aa71530752d11d624fa198e81a6fd009a6604475acc0b0cce8ac8c021a27b3" exitCode=0 Jan 20 18:09:30 crc kubenswrapper[4661]: I0120 18:09:30.206526 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrhnk" event={"ID":"16283339-07c9-417a-9616-06e3f9eac63d","Type":"ContainerDied","Data":"e0aa71530752d11d624fa198e81a6fd009a6604475acc0b0cce8ac8c021a27b3"} Jan 20 18:09:30 crc kubenswrapper[4661]: I0120 18:09:30.339435 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:09:30 crc kubenswrapper[4661]: I0120 18:09:30.339554 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:09:30 crc kubenswrapper[4661]: I0120 18:09:30.382550 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:09:30 crc kubenswrapper[4661]: I0120 18:09:30.674052 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:09:30 crc kubenswrapper[4661]: I0120 18:09:30.674127 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:09:30 crc kubenswrapper[4661]: I0120 18:09:30.739401 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:09:30 crc kubenswrapper[4661]: I0120 18:09:30.786476 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:09:30 crc kubenswrapper[4661]: I0120 18:09:30.805080 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:09:30 crc kubenswrapper[4661]: I0120 18:09:30.805131 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:09:30 crc kubenswrapper[4661]: I0120 18:09:30.852107 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:09:31 crc kubenswrapper[4661]: I0120 18:09:31.217873 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e" exitCode=0 Jan 20 18:09:31 crc kubenswrapper[4661]: I0120 18:09:31.219249 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" 
event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e"} Jan 20 18:09:31 crc kubenswrapper[4661]: I0120 18:09:31.268062 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:09:31 crc kubenswrapper[4661]: I0120 18:09:31.281112 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:09:32 crc kubenswrapper[4661]: I0120 18:09:32.226105 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"99d4d400e62492d7e5ae5501c92fd17df5fa6c400aad3dbfd4b4f9a9fbee2fb0"} Jan 20 18:09:33 crc kubenswrapper[4661]: I0120 18:09:33.293407 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:09:33 crc kubenswrapper[4661]: I0120 18:09:33.293903 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.246222 4661 generic.go:334] "Generic (PLEG): container finished" podID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" containerID="9a946d60c8e350051640c306ff3c3cd470a5ae117cb726b5ad7c29b7c8c8b206" exitCode=0 Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.246309 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pblrm" event={"ID":"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c","Type":"ContainerDied","Data":"9a946d60c8e350051640c306ff3c3cd470a5ae117cb726b5ad7c29b7c8c8b206"} Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.250956 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvkbf" event={"ID":"40861cf6-5e11-46ad-be02-b415c4f06dee","Type":"ContainerStarted","Data":"20844769653e3bc8dd46c3d2e967c83d2cddc6e594ea5c2d198fb826c885b921"} Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.253439 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrhnk" event={"ID":"16283339-07c9-417a-9616-06e3f9eac63d","Type":"ContainerStarted","Data":"9d2603d12c9a99da72a3c05d6a7f0ae03f3b5cc767545f951ceea74120944966"} Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.303800 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zrhnk" podStartSLOduration=2.497929199 podStartE2EDuration="1m11.303780807s" podCreationTimestamp="2026-01-20 18:08:23 +0000 UTC" firstStartedPulling="2026-01-20 18:08:24.865984408 +0000 UTC m=+161.196774070" lastFinishedPulling="2026-01-20 18:09:33.671836016 +0000 UTC m=+230.002625678" observedRunningTime="2026-01-20 18:09:34.287874709 +0000 UTC m=+230.618664371" watchObservedRunningTime="2026-01-20 18:09:34.303780807 +0000 UTC m=+230.634570469" Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.304024 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rvkbf" podStartSLOduration=3.749898096 podStartE2EDuration="1m13.304021174s" podCreationTimestamp="2026-01-20 18:08:21 +0000 UTC" firstStartedPulling="2026-01-20 18:08:23.826885567 +0000 UTC m=+160.157675229" lastFinishedPulling="2026-01-20 18:09:33.381008645 +0000 UTC 
m=+229.711798307" observedRunningTime="2026-01-20 18:09:34.303244233 +0000 UTC m=+230.634033895" watchObservedRunningTime="2026-01-20 18:09:34.304021174 +0000 UTC m=+230.634810836" Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.355415 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cr2m2" podUID="a087d508-430f-45ba-bff2-b58d06cebd51" containerName="registry-server" probeResult="failure" output=< Jan 20 18:09:34 crc kubenswrapper[4661]: timeout: failed to connect service ":50051" within 1s Jan 20 18:09:34 crc kubenswrapper[4661]: > Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.373632 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vgcbk"] Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.374051 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vgcbk" podUID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" containerName="registry-server" containerID="cri-o://8548d6534cf2791c70d6cb0b7090e2a1a6f8634f2d1279c17f3747d3ca252c80" gracePeriod=2 Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.572160 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kpzrp"] Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.572376 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kpzrp" podUID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" containerName="registry-server" containerID="cri-o://ca21fcff88b79903679035dce1b6d15cda3bc94194c329bbe06f013f61f70320" gracePeriod=2 Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.802109 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.927175 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-utilities\") pod \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\" (UID: \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\") " Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.927241 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-catalog-content\") pod \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\" (UID: \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\") " Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.927263 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh765\" (UniqueName: \"kubernetes.io/projected/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-kube-api-access-sh765\") pod \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\" (UID: \"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38\") " Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.932904 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-utilities" (OuterVolumeSpecName: "utilities") pod "b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" (UID: "b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.935885 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-kube-api-access-sh765" (OuterVolumeSpecName: "kube-api-access-sh765") pod "b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" (UID: "b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38"). InnerVolumeSpecName "kube-api-access-sh765". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:09:34 crc kubenswrapper[4661]: I0120 18:09:34.952538 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.014239 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" (UID: "b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.028380 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks96p\" (UniqueName: \"kubernetes.io/projected/8f48199b-41a0-44a2-b1b4-2f623ab0f413-kube-api-access-ks96p\") pod \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\" (UID: \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\") " Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.028461 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48199b-41a0-44a2-b1b4-2f623ab0f413-utilities\") pod \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\" (UID: \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\") " Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.028504 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48199b-41a0-44a2-b1b4-2f623ab0f413-catalog-content\") pod \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\" (UID: \"8f48199b-41a0-44a2-b1b4-2f623ab0f413\") " Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.028721 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.028736 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.028748 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh765\" (UniqueName: \"kubernetes.io/projected/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38-kube-api-access-sh765\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.033454 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f48199b-41a0-44a2-b1b4-2f623ab0f413-utilities" (OuterVolumeSpecName: "utilities") pod "8f48199b-41a0-44a2-b1b4-2f623ab0f413" (UID: "8f48199b-41a0-44a2-b1b4-2f623ab0f413"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.033577 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f48199b-41a0-44a2-b1b4-2f623ab0f413-kube-api-access-ks96p" (OuterVolumeSpecName: "kube-api-access-ks96p") pod "8f48199b-41a0-44a2-b1b4-2f623ab0f413" (UID: "8f48199b-41a0-44a2-b1b4-2f623ab0f413"). InnerVolumeSpecName "kube-api-access-ks96p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.087185 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f48199b-41a0-44a2-b1b4-2f623ab0f413-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f48199b-41a0-44a2-b1b4-2f623ab0f413" (UID: "8f48199b-41a0-44a2-b1b4-2f623ab0f413"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.129588 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks96p\" (UniqueName: \"kubernetes.io/projected/8f48199b-41a0-44a2-b1b4-2f623ab0f413-kube-api-access-ks96p\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.129629 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48199b-41a0-44a2-b1b4-2f623ab0f413-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.129640 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48199b-41a0-44a2-b1b4-2f623ab0f413-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.258749 4661 generic.go:334] "Generic (PLEG): container finished" podID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" containerID="8548d6534cf2791c70d6cb0b7090e2a1a6f8634f2d1279c17f3747d3ca252c80" exitCode=0 Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.258807 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgcbk" event={"ID":"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38","Type":"ContainerDied","Data":"8548d6534cf2791c70d6cb0b7090e2a1a6f8634f2d1279c17f3747d3ca252c80"} Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.258834 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgcbk" event={"ID":"b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38","Type":"ContainerDied","Data":"86c905d9ac031ea7d0030bebbc5ca89a0915fcd1f4154dc195382df9a525b27b"} Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.258851 4661 scope.go:117] "RemoveContainer" containerID="8548d6534cf2791c70d6cb0b7090e2a1a6f8634f2d1279c17f3747d3ca252c80" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.259158 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vgcbk" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.267772 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pblrm" event={"ID":"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c","Type":"ContainerStarted","Data":"68d03292c6da35bd2a4ee7317059378ec16e21005d5c98ede22ca0004e33afac"} Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.270986 4661 generic.go:334] "Generic (PLEG): container finished" podID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" containerID="ca21fcff88b79903679035dce1b6d15cda3bc94194c329bbe06f013f61f70320" exitCode=0 Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.271029 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpzrp" event={"ID":"8f48199b-41a0-44a2-b1b4-2f623ab0f413","Type":"ContainerDied","Data":"ca21fcff88b79903679035dce1b6d15cda3bc94194c329bbe06f013f61f70320"} Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.271049 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpzrp" event={"ID":"8f48199b-41a0-44a2-b1b4-2f623ab0f413","Type":"ContainerDied","Data":"314c6206849cbdcb2879a061abb97fdc2430026f135356a81c1a9d27e19dc876"} Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.271252 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kpzrp" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.284254 4661 scope.go:117] "RemoveContainer" containerID="e06991b2c368f2693448e08b4bca9b968c55e1eb0debfc8f0cff76f32f116e0c" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.297449 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pblrm" podStartSLOduration=3.5061800610000002 podStartE2EDuration="1m13.297401531s" podCreationTimestamp="2026-01-20 18:08:22 +0000 UTC" firstStartedPulling="2026-01-20 18:08:24.8575226 +0000 UTC m=+161.188312262" lastFinishedPulling="2026-01-20 18:09:34.64874407 +0000 UTC m=+230.979533732" observedRunningTime="2026-01-20 18:09:35.294436174 +0000 UTC m=+231.625225846" watchObservedRunningTime="2026-01-20 18:09:35.297401531 +0000 UTC m=+231.628191193" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.302310 4661 scope.go:117] "RemoveContainer" containerID="72f35ed4405aaeb208d8c537c9714bf5f042011bb254200b4d219d863d0e205d" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.316806 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vgcbk"] Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.319592 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vgcbk"] Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.323090 4661 scope.go:117] "RemoveContainer" containerID="8548d6534cf2791c70d6cb0b7090e2a1a6f8634f2d1279c17f3747d3ca252c80" Jan 20 18:09:35 crc kubenswrapper[4661]: E0120 18:09:35.323700 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8548d6534cf2791c70d6cb0b7090e2a1a6f8634f2d1279c17f3747d3ca252c80\": container with ID starting with 8548d6534cf2791c70d6cb0b7090e2a1a6f8634f2d1279c17f3747d3ca252c80 not found: ID does not exist" containerID="8548d6534cf2791c70d6cb0b7090e2a1a6f8634f2d1279c17f3747d3ca252c80" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.323758 4661 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8548d6534cf2791c70d6cb0b7090e2a1a6f8634f2d1279c17f3747d3ca252c80"} err="failed to get container status \"8548d6534cf2791c70d6cb0b7090e2a1a6f8634f2d1279c17f3747d3ca252c80\": rpc error: code = NotFound desc = could not find container \"8548d6534cf2791c70d6cb0b7090e2a1a6f8634f2d1279c17f3747d3ca252c80\": container with ID starting with 8548d6534cf2791c70d6cb0b7090e2a1a6f8634f2d1279c17f3747d3ca252c80 not found: ID does not exist" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.323793 4661 scope.go:117] "RemoveContainer" containerID="e06991b2c368f2693448e08b4bca9b968c55e1eb0debfc8f0cff76f32f116e0c" Jan 20 18:09:35 crc kubenswrapper[4661]: E0120 18:09:35.324247 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e06991b2c368f2693448e08b4bca9b968c55e1eb0debfc8f0cff76f32f116e0c\": container with ID starting with e06991b2c368f2693448e08b4bca9b968c55e1eb0debfc8f0cff76f32f116e0c not found: ID does not exist" containerID="e06991b2c368f2693448e08b4bca9b968c55e1eb0debfc8f0cff76f32f116e0c" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.324286 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e06991b2c368f2693448e08b4bca9b968c55e1eb0debfc8f0cff76f32f116e0c"} err="failed to get container status \"e06991b2c368f2693448e08b4bca9b968c55e1eb0debfc8f0cff76f32f116e0c\": rpc error: code = NotFound desc = could not find container \"e06991b2c368f2693448e08b4bca9b968c55e1eb0debfc8f0cff76f32f116e0c\": container with ID starting with e06991b2c368f2693448e08b4bca9b968c55e1eb0debfc8f0cff76f32f116e0c not found: ID does not exist" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.324322 4661 scope.go:117] "RemoveContainer" containerID="72f35ed4405aaeb208d8c537c9714bf5f042011bb254200b4d219d863d0e205d" Jan 20 18:09:35 crc kubenswrapper[4661]: E0120 18:09:35.325621 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72f35ed4405aaeb208d8c537c9714bf5f042011bb254200b4d219d863d0e205d\": container with ID starting with 72f35ed4405aaeb208d8c537c9714bf5f042011bb254200b4d219d863d0e205d not found: ID does not exist" containerID="72f35ed4405aaeb208d8c537c9714bf5f042011bb254200b4d219d863d0e205d" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.325794 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f35ed4405aaeb208d8c537c9714bf5f042011bb254200b4d219d863d0e205d"} err="failed to get container status \"72f35ed4405aaeb208d8c537c9714bf5f042011bb254200b4d219d863d0e205d\": rpc error: code = NotFound desc = could not find container \"72f35ed4405aaeb208d8c537c9714bf5f042011bb254200b4d219d863d0e205d\": container with ID starting with 72f35ed4405aaeb208d8c537c9714bf5f042011bb254200b4d219d863d0e205d not found: ID does not exist" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.326440 4661 scope.go:117] "RemoveContainer" containerID="ca21fcff88b79903679035dce1b6d15cda3bc94194c329bbe06f013f61f70320" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.334437 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kpzrp"] Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.338569 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kpzrp"] Jan 20 18:09:35 crc 
kubenswrapper[4661]: I0120 18:09:35.351843 4661 scope.go:117] "RemoveContainer" containerID="9b8bf4bcd0d341c298fb4e77bedf58dc83c7b292802aea620a01f5e6412e9e54" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.368039 4661 scope.go:117] "RemoveContainer" containerID="3978e0f4d060d08a2ac1273252335d07c7f12f11b63f7bf90b9853126592c9d5" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.383780 4661 scope.go:117] "RemoveContainer" containerID="ca21fcff88b79903679035dce1b6d15cda3bc94194c329bbe06f013f61f70320" Jan 20 18:09:35 crc kubenswrapper[4661]: E0120 18:09:35.384169 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca21fcff88b79903679035dce1b6d15cda3bc94194c329bbe06f013f61f70320\": container with ID starting with ca21fcff88b79903679035dce1b6d15cda3bc94194c329bbe06f013f61f70320 not found: ID does not exist" containerID="ca21fcff88b79903679035dce1b6d15cda3bc94194c329bbe06f013f61f70320" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.384194 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca21fcff88b79903679035dce1b6d15cda3bc94194c329bbe06f013f61f70320"} err="failed to get container status \"ca21fcff88b79903679035dce1b6d15cda3bc94194c329bbe06f013f61f70320\": rpc error: code = NotFound desc = could not find container \"ca21fcff88b79903679035dce1b6d15cda3bc94194c329bbe06f013f61f70320\": container with ID starting with ca21fcff88b79903679035dce1b6d15cda3bc94194c329bbe06f013f61f70320 not found: ID does not exist" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.384212 4661 scope.go:117] "RemoveContainer" containerID="9b8bf4bcd0d341c298fb4e77bedf58dc83c7b292802aea620a01f5e6412e9e54" Jan 20 18:09:35 crc kubenswrapper[4661]: E0120 18:09:35.384464 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b8bf4bcd0d341c298fb4e77bedf58dc83c7b292802aea620a01f5e6412e9e54\": container with ID starting with 9b8bf4bcd0d341c298fb4e77bedf58dc83c7b292802aea620a01f5e6412e9e54 not found: ID does not exist" containerID="9b8bf4bcd0d341c298fb4e77bedf58dc83c7b292802aea620a01f5e6412e9e54" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.384478 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8bf4bcd0d341c298fb4e77bedf58dc83c7b292802aea620a01f5e6412e9e54"} err="failed to get container status \"9b8bf4bcd0d341c298fb4e77bedf58dc83c7b292802aea620a01f5e6412e9e54\": rpc error: code = NotFound desc = could not find container \"9b8bf4bcd0d341c298fb4e77bedf58dc83c7b292802aea620a01f5e6412e9e54\": container with ID starting with 9b8bf4bcd0d341c298fb4e77bedf58dc83c7b292802aea620a01f5e6412e9e54 not found: ID does not exist" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.384490 4661 scope.go:117] "RemoveContainer" containerID="3978e0f4d060d08a2ac1273252335d07c7f12f11b63f7bf90b9853126592c9d5" Jan 20 18:09:35 crc kubenswrapper[4661]: E0120 18:09:35.384888 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3978e0f4d060d08a2ac1273252335d07c7f12f11b63f7bf90b9853126592c9d5\": container with ID starting with 3978e0f4d060d08a2ac1273252335d07c7f12f11b63f7bf90b9853126592c9d5 not found: ID does not exist" containerID="3978e0f4d060d08a2ac1273252335d07c7f12f11b63f7bf90b9853126592c9d5" Jan 20 18:09:35 crc kubenswrapper[4661]: I0120 18:09:35.384917 4661 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3978e0f4d060d08a2ac1273252335d07c7f12f11b63f7bf90b9853126592c9d5"} err="failed to get container status \"3978e0f4d060d08a2ac1273252335d07c7f12f11b63f7bf90b9853126592c9d5\": rpc error: code = NotFound desc = could not find container \"3978e0f4d060d08a2ac1273252335d07c7f12f11b63f7bf90b9853126592c9d5\": container with ID starting with 3978e0f4d060d08a2ac1273252335d07c7f12f11b63f7bf90b9853126592c9d5 not found: ID does not exist" Jan 20 18:09:36 crc kubenswrapper[4661]: I0120 18:09:36.151940 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" path="/var/lib/kubelet/pods/8f48199b-41a0-44a2-b1b4-2f623ab0f413/volumes" Jan 20 18:09:36 crc kubenswrapper[4661]: I0120 18:09:36.153000 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" path="/var/lib/kubelet/pods/b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38/volumes" Jan 20 18:09:40 crc kubenswrapper[4661]: I0120 18:09:40.747307 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:09:41 crc kubenswrapper[4661]: I0120 18:09:41.968425 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:09:41 crc kubenswrapper[4661]: I0120 18:09:41.968534 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:09:42 crc kubenswrapper[4661]: I0120 18:09:42.039411 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:09:42 crc kubenswrapper[4661]: I0120 18:09:42.378362 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:09:42 crc kubenswrapper[4661]: I0120 18:09:42.737104 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:09:42 crc kubenswrapper[4661]: I0120 18:09:42.737157 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:09:42 crc kubenswrapper[4661]: I0120 18:09:42.778144 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:09:43 crc kubenswrapper[4661]: I0120 18:09:43.350402 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:09:43 crc kubenswrapper[4661]: I0120 18:09:43.399372 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:09:43 crc kubenswrapper[4661]: I0120 18:09:43.402215 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:09:43 crc kubenswrapper[4661]: I0120 18:09:43.604068 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:09:43 crc kubenswrapper[4661]: I0120 18:09:43.604152 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:09:43 crc kubenswrapper[4661]: I0120 18:09:43.668722 4661 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:09:43 crc kubenswrapper[4661]: I0120 18:09:43.970166 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pblrm"] Jan 20 18:09:44 crc kubenswrapper[4661]: I0120 18:09:44.401027 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:09:45 crc kubenswrapper[4661]: I0120 18:09:45.345648 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pblrm" podUID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" containerName="registry-server" containerID="cri-o://68d03292c6da35bd2a4ee7317059378ec16e21005d5c98ede22ca0004e33afac" gracePeriod=2 Jan 20 18:09:45 crc kubenswrapper[4661]: I0120 18:09:45.772073 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrhnk"] Jan 20 18:09:46 crc kubenswrapper[4661]: I0120 18:09:46.351358 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zrhnk" podUID="16283339-07c9-417a-9616-06e3f9eac63d" containerName="registry-server" containerID="cri-o://9d2603d12c9a99da72a3c05d6a7f0ae03f3b5cc767545f951ceea74120944966" gracePeriod=2 Jan 20 18:09:47 crc kubenswrapper[4661]: I0120 18:09:47.364950 4661 generic.go:334] "Generic (PLEG): container finished" podID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" containerID="68d03292c6da35bd2a4ee7317059378ec16e21005d5c98ede22ca0004e33afac" exitCode=0 Jan 20 18:09:47 crc kubenswrapper[4661]: I0120 18:09:47.365093 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pblrm" event={"ID":"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c","Type":"ContainerDied","Data":"68d03292c6da35bd2a4ee7317059378ec16e21005d5c98ede22ca0004e33afac"} Jan 20 18:09:47 crc kubenswrapper[4661]: I0120 18:09:47.948789 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.009752 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-catalog-content\") pod \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\" (UID: \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\") " Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.009845 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5rhd\" (UniqueName: \"kubernetes.io/projected/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-kube-api-access-k5rhd\") pod \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\" (UID: \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\") " Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.009896 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-utilities\") pod \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\" (UID: \"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c\") " Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.010802 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-utilities" (OuterVolumeSpecName: "utilities") pod "038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" (UID: "038c2d26-4bbf-46fd-90fa-d6e5a6929c7c"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.022256 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-kube-api-access-k5rhd" (OuterVolumeSpecName: "kube-api-access-k5rhd") pod "038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" (UID: "038c2d26-4bbf-46fd-90fa-d6e5a6929c7c"). InnerVolumeSpecName "kube-api-access-k5rhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.037919 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" (UID: "038c2d26-4bbf-46fd-90fa-d6e5a6929c7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.111115 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.111161 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5rhd\" (UniqueName: \"kubernetes.io/projected/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-kube-api-access-k5rhd\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.111175 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.376853 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pblrm" Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.376654 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pblrm" event={"ID":"038c2d26-4bbf-46fd-90fa-d6e5a6929c7c","Type":"ContainerDied","Data":"3042670c2470b291604b62ad3991ddbc65087e814f733df5ed68552f08fe6b92"} Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.378440 4661 scope.go:117] "RemoveContainer" containerID="68d03292c6da35bd2a4ee7317059378ec16e21005d5c98ede22ca0004e33afac" Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.385323 4661 generic.go:334] "Generic (PLEG): container finished" podID="16283339-07c9-417a-9616-06e3f9eac63d" containerID="9d2603d12c9a99da72a3c05d6a7f0ae03f3b5cc767545f951ceea74120944966" exitCode=0 Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.385377 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrhnk" event={"ID":"16283339-07c9-417a-9616-06e3f9eac63d","Type":"ContainerDied","Data":"9d2603d12c9a99da72a3c05d6a7f0ae03f3b5cc767545f951ceea74120944966"} Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.408641 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pblrm"] Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.409168 4661 scope.go:117] "RemoveContainer" containerID="9a946d60c8e350051640c306ff3c3cd470a5ae117cb726b5ad7c29b7c8c8b206" Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.415909 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pblrm"] Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.431853 4661 scope.go:117] "RemoveContainer" containerID="4f0be225dab321348eea5c7d2d972c6eedec8afd15e1533559f7c75fdcbf7eeb" Jan 20 18:09:48 crc kubenswrapper[4661]: I0120 18:09:48.945010 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.023340 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16283339-07c9-417a-9616-06e3f9eac63d-utilities\") pod \"16283339-07c9-417a-9616-06e3f9eac63d\" (UID: \"16283339-07c9-417a-9616-06e3f9eac63d\") " Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.023697 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16283339-07c9-417a-9616-06e3f9eac63d-catalog-content\") pod \"16283339-07c9-417a-9616-06e3f9eac63d\" (UID: \"16283339-07c9-417a-9616-06e3f9eac63d\") " Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.023888 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk5ll\" (UniqueName: \"kubernetes.io/projected/16283339-07c9-417a-9616-06e3f9eac63d-kube-api-access-vk5ll\") pod \"16283339-07c9-417a-9616-06e3f9eac63d\" (UID: \"16283339-07c9-417a-9616-06e3f9eac63d\") " Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.024939 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16283339-07c9-417a-9616-06e3f9eac63d-utilities" (OuterVolumeSpecName: "utilities") pod "16283339-07c9-417a-9616-06e3f9eac63d" (UID: "16283339-07c9-417a-9616-06e3f9eac63d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.032911 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16283339-07c9-417a-9616-06e3f9eac63d-kube-api-access-vk5ll" (OuterVolumeSpecName: "kube-api-access-vk5ll") pod "16283339-07c9-417a-9616-06e3f9eac63d" (UID: "16283339-07c9-417a-9616-06e3f9eac63d"). InnerVolumeSpecName "kube-api-access-vk5ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.125322 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk5ll\" (UniqueName: \"kubernetes.io/projected/16283339-07c9-417a-9616-06e3f9eac63d-kube-api-access-vk5ll\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.125354 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16283339-07c9-417a-9616-06e3f9eac63d-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.130933 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16283339-07c9-417a-9616-06e3f9eac63d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16283339-07c9-417a-9616-06e3f9eac63d" (UID: "16283339-07c9-417a-9616-06e3f9eac63d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.228162 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16283339-07c9-417a-9616-06e3f9eac63d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.393279 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrhnk" event={"ID":"16283339-07c9-417a-9616-06e3f9eac63d","Type":"ContainerDied","Data":"58f8e98d66849e84526f9f1bbda84723772038c0330853f062373fefd9dfff03"} Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.393327 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrhnk" Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.393949 4661 scope.go:117] "RemoveContainer" containerID="9d2603d12c9a99da72a3c05d6a7f0ae03f3b5cc767545f951ceea74120944966" Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.409224 4661 scope.go:117] "RemoveContainer" containerID="e0aa71530752d11d624fa198e81a6fd009a6604475acc0b0cce8ac8c021a27b3" Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.431725 4661 scope.go:117] "RemoveContainer" containerID="86714e9a821150efc96b7b00bb40869cfd96c0ca6de7ea024f1e404ab353d103" Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.432605 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrhnk"] Jan 20 18:09:49 crc kubenswrapper[4661]: I0120 18:09:49.440275 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zrhnk"] Jan 20 18:09:50 crc kubenswrapper[4661]: I0120 18:09:50.150279 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" path="/var/lib/kubelet/pods/038c2d26-4bbf-46fd-90fa-d6e5a6929c7c/volumes" Jan 20 18:09:50 crc kubenswrapper[4661]: I0120 18:09:50.151415 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16283339-07c9-417a-9616-06e3f9eac63d" path="/var/lib/kubelet/pods/16283339-07c9-417a-9616-06e3f9eac63d/volumes" Jan 20 18:09:51 crc kubenswrapper[4661]: I0120 18:09:51.806980 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bh9mt"] Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.961102 4661 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.962234 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" containerName="registry-server" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.962349 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" containerName="registry-server" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.962410 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" containerName="extract-utilities" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.962465 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" containerName="extract-utilities" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.962518 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" containerName="extract-content" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.962576 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" containerName="extract-content" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.962636 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" containerName="extract-utilities" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.962735 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" containerName="extract-utilities" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.962808 4661 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" containerName="extract-content" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.962861 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" containerName="extract-content" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.962919 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16283339-07c9-417a-9616-06e3f9eac63d" containerName="extract-utilities" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.963005 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="16283339-07c9-417a-9616-06e3f9eac63d" containerName="extract-utilities" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.963086 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" containerName="extract-content" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.963168 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" containerName="extract-content" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.963249 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52eae3f3-7a47-40de-9c5a-739d65fd397c" containerName="pruner" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.963316 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="52eae3f3-7a47-40de-9c5a-739d65fd397c" containerName="pruner" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.963371 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" containerName="registry-server" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.963423 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" containerName="registry-server" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.963482 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" containerName="registry-server" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.963535 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" containerName="registry-server" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.963588 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16283339-07c9-417a-9616-06e3f9eac63d" containerName="registry-server" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.963639 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="16283339-07c9-417a-9616-06e3f9eac63d" containerName="registry-server" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.963735 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16283339-07c9-417a-9616-06e3f9eac63d" containerName="extract-content" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.963801 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="16283339-07c9-417a-9616-06e3f9eac63d" containerName="extract-content" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.963859 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" containerName="extract-utilities" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.963917 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" containerName="extract-utilities" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.964072 4661 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="b74c3546-f6c8-44f1-a0b7-cf45a7b7ba38" containerName="registry-server" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.964136 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="038c2d26-4bbf-46fd-90fa-d6e5a6929c7c" containerName="registry-server" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.964202 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="16283339-07c9-417a-9616-06e3f9eac63d" containerName="registry-server" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.964265 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f48199b-41a0-44a2-b1b4-2f623ab0f413" containerName="registry-server" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.964325 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="52eae3f3-7a47-40de-9c5a-739d65fd397c" containerName="pruner" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.964700 4661 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.964794 4661 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.964967 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.964991 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.965364 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.965482 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.965578 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.965294 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a" gracePeriod=15 Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.965254 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77" gracePeriod=15 Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.965362 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20" gracePeriod=15 Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.965282 4661 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df" gracePeriod=15 Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.965337 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251" gracePeriod=15 Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.965660 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.966199 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.966294 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.966711 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.966782 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.966873 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.968126 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.968561 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 18:09:54 crc kubenswrapper[4661]: E0120 18:09:54.968649 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.968756 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.968964 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.969065 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.969162 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.969249 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 18:09:54 crc 
kubenswrapper[4661]: I0120 18:09:54.969334 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.969420 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 18:09:54 crc kubenswrapper[4661]: I0120 18:09:54.970367 4661 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 20 18:09:55 crc kubenswrapper[4661]: E0120 18:09:55.016577 4661 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.101727 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.101773 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.101803 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.101820 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.101837 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.101857 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.101890 4661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.101905 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203015 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203055 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203091 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203119 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203133 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203178 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203232 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203159 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203193 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203240 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203316 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203470 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203485 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203527 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203568 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.203648 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.317571 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:55 crc kubenswrapper[4661]: E0120 18:09:55.356233 4661 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c82d2c1ba9846 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 18:09:55.35473671 +0000 UTC m=+251.685526402,LastTimestamp:2026-01-20 18:09:55.35473671 +0000 UTC m=+251.685526402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.434079 4661 generic.go:334] "Generic (PLEG): container finished" podID="e2ed603c-7744-4ed0-b168-f022e5b8e145" containerID="a914a75bc2c1e1c5f87a4847b18f8a9588cc49af3eada2587fa0a69d40f17be2" exitCode=0 Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.434173 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e2ed603c-7744-4ed0-b168-f022e5b8e145","Type":"ContainerDied","Data":"a914a75bc2c1e1c5f87a4847b18f8a9588cc49af3eada2587fa0a69d40f17be2"} Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.434990 4661 status_manager.go:851] "Failed to get status for pod" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.437553 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.439333 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.439888 4661 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77" exitCode=0 Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.439907 4661 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20" exitCode=0 Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.439917 4661 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df" exitCode=0 Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.439924 
4661 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a" exitCode=2 Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.439974 4661 scope.go:117] "RemoveContainer" containerID="3584f02089912eecb6ea77d78d4f093929ce92631cb9ea758f1311268963b6b1" Jan 20 18:09:55 crc kubenswrapper[4661]: I0120 18:09:55.441596 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"cbf9bce4be4c4d49a70e7cfb0d967bf8ba1b7fb3252202ed92c1a0c2b1c1c1e0"} Jan 20 18:09:56 crc kubenswrapper[4661]: E0120 18:09:56.233446 4661 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c82d2c1ba9846 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 18:09:55.35473671 +0000 UTC m=+251.685526402,LastTimestamp:2026-01-20 18:09:55.35473671 +0000 UTC m=+251.685526402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.448012 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"727bc80447e0f5d992ff97889a574647d7a8b35386fd4c933d3fa1bfd4bd76fa"} Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.448621 4661 status_manager.go:851] "Failed to get status for pod" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:56 crc kubenswrapper[4661]: E0120 18:09:56.451029 4661 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.451627 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.724108 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.725126 4661 status_manager.go:851] "Failed to get status for pod" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.823336 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2ed603c-7744-4ed0-b168-f022e5b8e145-kubelet-dir\") pod \"e2ed603c-7744-4ed0-b168-f022e5b8e145\" (UID: \"e2ed603c-7744-4ed0-b168-f022e5b8e145\") " Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.823435 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2ed603c-7744-4ed0-b168-f022e5b8e145-var-lock\") pod \"e2ed603c-7744-4ed0-b168-f022e5b8e145\" (UID: \"e2ed603c-7744-4ed0-b168-f022e5b8e145\") " Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.823470 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2ed603c-7744-4ed0-b168-f022e5b8e145-kube-api-access\") pod \"e2ed603c-7744-4ed0-b168-f022e5b8e145\" (UID: \"e2ed603c-7744-4ed0-b168-f022e5b8e145\") " Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.823516 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ed603c-7744-4ed0-b168-f022e5b8e145-var-lock" (OuterVolumeSpecName: "var-lock") pod "e2ed603c-7744-4ed0-b168-f022e5b8e145" (UID: "e2ed603c-7744-4ed0-b168-f022e5b8e145"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.823844 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ed603c-7744-4ed0-b168-f022e5b8e145-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e2ed603c-7744-4ed0-b168-f022e5b8e145" (UID: "e2ed603c-7744-4ed0-b168-f022e5b8e145"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.831537 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ed603c-7744-4ed0-b168-f022e5b8e145-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e2ed603c-7744-4ed0-b168-f022e5b8e145" (UID: "e2ed603c-7744-4ed0-b168-f022e5b8e145"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.924620 4661 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2ed603c-7744-4ed0-b168-f022e5b8e145-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.924652 4661 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2ed603c-7744-4ed0-b168-f022e5b8e145-var-lock\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:56 crc kubenswrapper[4661]: I0120 18:09:56.924661 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2ed603c-7744-4ed0-b168-f022e5b8e145-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.437626 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.438499 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.439389 4661 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.439894 4661 status_manager.go:851] "Failed to get status for pod" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.458524 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.458696 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e2ed603c-7744-4ed0-b168-f022e5b8e145","Type":"ContainerDied","Data":"9bc92273f2c497b883a6df21ef33201fb28b7771a0869015ad2267eda5eccf47"} Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.458833 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bc92273f2c497b883a6df21ef33201fb28b7771a0869015ad2267eda5eccf47" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.460986 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.461635 4661 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251" exitCode=0 Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.462344 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.462659 4661 scope.go:117] "RemoveContainer" containerID="5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77" Jan 20 18:09:57 crc kubenswrapper[4661]: E0120 18:09:57.462877 4661 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.471404 4661 status_manager.go:851] "Failed to get status for pod" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.471985 4661 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.482623 4661 scope.go:117] "RemoveContainer" containerID="f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.506691 4661 scope.go:117] "RemoveContainer" containerID="b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.532797 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.532868 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.532906 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.532935 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.532994 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.533088 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.533417 4661 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.533442 4661 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.533451 4661 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.534429 4661 scope.go:117] "RemoveContainer" containerID="8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.545939 4661 scope.go:117] "RemoveContainer" containerID="2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.557137 4661 scope.go:117] "RemoveContainer" containerID="6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.571656 4661 scope.go:117] "RemoveContainer" containerID="5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77" Jan 20 18:09:57 crc kubenswrapper[4661]: E0120 18:09:57.572192 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\": container with ID starting with 5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77 not found: ID does not exist" containerID="5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.572244 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77"} err="failed to get container status \"5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\": rpc error: code = NotFound desc = could not find container \"5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77\": container with ID starting with 5a8e025f49d745d0d846c606a3ec9dd6fbd2d255e8662ba1fd1a65f0d4289e77 not found: ID does not exist" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.572279 4661 scope.go:117] "RemoveContainer" containerID="f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20" Jan 20 18:09:57 crc kubenswrapper[4661]: E0120 18:09:57.572794 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\": container with ID starting with f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20 not 
found: ID does not exist" containerID="f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.572881 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20"} err="failed to get container status \"f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\": rpc error: code = NotFound desc = could not find container \"f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20\": container with ID starting with f09e5fcc7fafac7a11257184f5919c06b5b2e56a677b67c664e6489d9a581a20 not found: ID does not exist" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.572966 4661 scope.go:117] "RemoveContainer" containerID="b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df" Jan 20 18:09:57 crc kubenswrapper[4661]: E0120 18:09:57.573375 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\": container with ID starting with b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df not found: ID does not exist" containerID="b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.573408 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df"} err="failed to get container status \"b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\": rpc error: code = NotFound desc = could not find container \"b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df\": container with ID starting with b7995b8e096ce8c7adf28d9baa4e12d943a697db80ee2b6e6b347b334e44b0df not found: ID does not exist" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.573430 4661 scope.go:117] "RemoveContainer" containerID="8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a" Jan 20 18:09:57 crc kubenswrapper[4661]: E0120 18:09:57.573762 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\": container with ID starting with 8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a not found: ID does not exist" containerID="8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.573785 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a"} err="failed to get container status \"8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\": rpc error: code = NotFound desc = could not find container \"8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a\": container with ID starting with 8a1fb928361cffd6f14855b6c1cf5964eccc9f923435bf79dddd8f0c94decd9a not found: ID does not exist" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.573801 4661 scope.go:117] "RemoveContainer" containerID="2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251" Jan 20 18:09:57 crc kubenswrapper[4661]: E0120 18:09:57.574079 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\": container with ID starting with 2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251 not found: ID does not exist" containerID="2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.574151 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251"} err="failed to get container status \"2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\": rpc error: code = NotFound desc = could not find container \"2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251\": container with ID starting with 2286c38d543136df613b2611b8d494d0777a950158adb169c26708335c024251 not found: ID does not exist" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.574254 4661 scope.go:117] "RemoveContainer" containerID="6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538" Jan 20 18:09:57 crc kubenswrapper[4661]: E0120 18:09:57.574518 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\": container with ID starting with 6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538 not found: ID does not exist" containerID="6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.574536 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538"} err="failed to get container status \"6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\": rpc error: code = NotFound desc = could not find container \"6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538\": container with ID starting with 6eedc9bdf3c37af238cf9ad5172a8d93751c0641cbf43057016157f086c77538 not found: ID does not exist" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.788782 4661 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.789063 4661 status_manager.go:851] "Failed to get status for pod" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:57 crc kubenswrapper[4661]: E0120 18:09:57.892792 4661 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:57 crc kubenswrapper[4661]: E0120 18:09:57.893197 4661 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:57 crc kubenswrapper[4661]: E0120 18:09:57.893612 4661 controller.go:195] 
"Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:57 crc kubenswrapper[4661]: E0120 18:09:57.893867 4661 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:57 crc kubenswrapper[4661]: E0120 18:09:57.894067 4661 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:09:57 crc kubenswrapper[4661]: I0120 18:09:57.894096 4661 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 20 18:09:57 crc kubenswrapper[4661]: E0120 18:09:57.894261 4661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="200ms" Jan 20 18:09:58 crc kubenswrapper[4661]: E0120 18:09:58.094856 4661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="400ms" Jan 20 18:09:58 crc kubenswrapper[4661]: I0120 18:09:58.149245 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 20 18:09:58 crc kubenswrapper[4661]: E0120 18:09:58.495251 4661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="800ms" Jan 20 18:09:59 crc kubenswrapper[4661]: E0120 18:09:59.296292 4661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="1.6s" Jan 20 18:10:00 crc kubenswrapper[4661]: E0120 18:10:00.897268 4661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="3.2s" Jan 20 18:10:04 crc kubenswrapper[4661]: E0120 18:10:04.098437 4661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="6.4s" Jan 20 18:10:04 crc kubenswrapper[4661]: I0120 18:10:04.146417 4661 status_manager.go:851] "Failed to get status for pod" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.36:6443: connect: connection refused" Jan 20 18:10:06 crc kubenswrapper[4661]: E0120 18:10:06.209451 4661 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" volumeName="registry-storage" Jan 20 18:10:06 crc kubenswrapper[4661]: E0120 18:10:06.235038 4661 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c82d2c1ba9846 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 18:09:55.35473671 +0000 UTC m=+251.685526402,LastTimestamp:2026-01-20 18:09:55.35473671 +0000 UTC m=+251.685526402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 18:10:08 crc kubenswrapper[4661]: I0120 18:10:08.531534 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 18:10:08 crc kubenswrapper[4661]: I0120 18:10:08.532578 4661 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e" exitCode=1 Jan 20 18:10:08 crc kubenswrapper[4661]: I0120 18:10:08.532641 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e"} Jan 20 18:10:08 crc kubenswrapper[4661]: I0120 18:10:08.533362 4661 scope.go:117] "RemoveContainer" containerID="008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e" Jan 20 18:10:08 crc kubenswrapper[4661]: I0120 18:10:08.534933 4661 status_manager.go:851] "Failed to get status for pod" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:10:08 crc kubenswrapper[4661]: I0120 18:10:08.535500 4661 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": 
dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:10:09 crc kubenswrapper[4661]: I0120 18:10:09.551378 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 18:10:09 crc kubenswrapper[4661]: I0120 18:10:09.551853 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"da05a19eaa18ff74143f4df1e25642832831b668e6c1e8319b8386868c778345"} Jan 20 18:10:09 crc kubenswrapper[4661]: I0120 18:10:09.553213 4661 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:10:09 crc kubenswrapper[4661]: I0120 18:10:09.553871 4661 status_manager.go:851] "Failed to get status for pod" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:10:10 crc kubenswrapper[4661]: I0120 18:10:10.146772 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:10:10 crc kubenswrapper[4661]: I0120 18:10:10.148096 4661 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:10:10 crc kubenswrapper[4661]: I0120 18:10:10.149076 4661 status_manager.go:851] "Failed to get status for pod" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:10:10 crc kubenswrapper[4661]: I0120 18:10:10.175190 4661 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5947c5f0-b932-4127-a183-6b9023784c81" Jan 20 18:10:10 crc kubenswrapper[4661]: I0120 18:10:10.175241 4661 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5947c5f0-b932-4127-a183-6b9023784c81" Jan 20 18:10:10 crc kubenswrapper[4661]: E0120 18:10:10.175908 4661 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:10:10 crc kubenswrapper[4661]: I0120 18:10:10.176741 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:10:10 crc kubenswrapper[4661]: W0120 18:10:10.197449 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-54a29cc43bfd74ad6b10453eefb0d4cf66d0b83644cf99c374b694f4d4c3896e WatchSource:0}: Error finding container 54a29cc43bfd74ad6b10453eefb0d4cf66d0b83644cf99c374b694f4d4c3896e: Status 404 returned error can't find the container with id 54a29cc43bfd74ad6b10453eefb0d4cf66d0b83644cf99c374b694f4d4c3896e Jan 20 18:10:10 crc kubenswrapper[4661]: E0120 18:10:10.500217 4661 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="7s" Jan 20 18:10:10 crc kubenswrapper[4661]: I0120 18:10:10.572484 4661 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e17694448ac2c464bdfb97694046afb6d41c2f145d1ae8286bb39db7fb7abb8c" exitCode=0 Jan 20 18:10:10 crc kubenswrapper[4661]: I0120 18:10:10.572545 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e17694448ac2c464bdfb97694046afb6d41c2f145d1ae8286bb39db7fb7abb8c"} Jan 20 18:10:10 crc kubenswrapper[4661]: I0120 18:10:10.572626 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"54a29cc43bfd74ad6b10453eefb0d4cf66d0b83644cf99c374b694f4d4c3896e"} Jan 20 18:10:10 crc kubenswrapper[4661]: I0120 18:10:10.573007 4661 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5947c5f0-b932-4127-a183-6b9023784c81" Jan 20 18:10:10 crc kubenswrapper[4661]: I0120 18:10:10.573031 4661 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5947c5f0-b932-4127-a183-6b9023784c81" Jan 20 18:10:10 crc kubenswrapper[4661]: E0120 18:10:10.573578 4661 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:10:10 crc kubenswrapper[4661]: I0120 18:10:10.574339 4661 status_manager.go:851] "Failed to get status for pod" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:10:10 crc kubenswrapper[4661]: I0120 18:10:10.574800 4661 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 20 18:10:11 crc kubenswrapper[4661]: I0120 18:10:11.582466 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d204196ec8ad834e573921c94e83c1f9d032774ae1037c5bcde5b17106a55566"} Jan 20 18:10:11 crc kubenswrapper[4661]: I0120 18:10:11.582731 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a6933c9163e2d7594787e87f7ab07acc7f4f09b24cb17e21db16428db881d971"} Jan 20 18:10:11 crc kubenswrapper[4661]: I0120 18:10:11.582745 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4b8c588f90e626d54f0f6acf483e3be0d79a581f6eec61bd5302ef17745c40cb"} Jan 20 18:10:11 crc kubenswrapper[4661]: I0120 18:10:11.582755 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0b95180bb38a8ea9ed6258950ca15370974dda2f44cb0617c9b09120738dc508"} Jan 20 18:10:12 crc kubenswrapper[4661]: I0120 18:10:12.589749 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"33d2f3415040bc187d9d69e2cb6f0373e2d4f825e94a1b0933b6c44dae75dea3"} Jan 20 18:10:12 crc kubenswrapper[4661]: I0120 18:10:12.590262 4661 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5947c5f0-b932-4127-a183-6b9023784c81" Jan 20 18:10:12 crc kubenswrapper[4661]: I0120 18:10:12.590278 4661 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5947c5f0-b932-4127-a183-6b9023784c81" Jan 20 18:10:12 crc kubenswrapper[4661]: I0120 18:10:12.590472 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:10:14 crc kubenswrapper[4661]: I0120 18:10:14.462555 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:10:14 crc kubenswrapper[4661]: I0120 18:10:14.462861 4661 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 20 18:10:14 crc kubenswrapper[4661]: I0120 18:10:14.463041 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 20 18:10:15 crc kubenswrapper[4661]: I0120 18:10:15.177211 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:10:15 crc kubenswrapper[4661]: I0120 18:10:15.177262 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:10:15 crc kubenswrapper[4661]: I0120 18:10:15.191271 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 
18:10:16 crc kubenswrapper[4661]: I0120 18:10:16.848130 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" podUID="1f0c818b-31de-43ee-a20a-1fc174261b42" containerName="oauth-openshift" containerID="cri-o://6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64" gracePeriod=15 Jan 20 18:10:16 crc kubenswrapper[4661]: I0120 18:10:16.852830 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:10:16 crc kubenswrapper[4661]: E0120 18:10:16.942229 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f0c818b_31de_43ee_a20a_1fc174261b42.slice/crio-6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f0c818b_31de_43ee_a20a_1fc174261b42.slice/crio-conmon-6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64.scope\": RecentStats: unable to find data in memory cache]" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.279841 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410242 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-session\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410286 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-trusted-ca-bundle\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410308 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-login\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410343 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-idp-0-file-data\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410359 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-service-ca\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410404 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnr4m\" (UniqueName: 
\"kubernetes.io/projected/1f0c818b-31de-43ee-a20a-1fc174261b42-kube-api-access-hnr4m\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410422 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-error\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410446 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-audit-policies\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410467 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-ocp-branding-template\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410492 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-router-certs\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410509 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-serving-cert\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410529 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-provider-selection\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410545 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f0c818b-31de-43ee-a20a-1fc174261b42-audit-dir\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.410559 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-cliconfig\") pod \"1f0c818b-31de-43ee-a20a-1fc174261b42\" (UID: \"1f0c818b-31de-43ee-a20a-1fc174261b42\") " Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.411463 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod 
"1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.411745 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.411941 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.412549 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.419505 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f0c818b-31de-43ee-a20a-1fc174261b42-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.420722 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.426179 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.426487 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.432615 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0c818b-31de-43ee-a20a-1fc174261b42-kube-api-access-hnr4m" (OuterVolumeSpecName: "kube-api-access-hnr4m") pod "1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "kube-api-access-hnr4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.432861 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.433255 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.445284 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.446986 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.455531 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "1f0c818b-31de-43ee-a20a-1fc174261b42" (UID: "1f0c818b-31de-43ee-a20a-1fc174261b42"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.511919 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.511972 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.511984 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.511996 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.512007 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.512018 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnr4m\" (UniqueName: \"kubernetes.io/projected/1f0c818b-31de-43ee-a20a-1fc174261b42-kube-api-access-hnr4m\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.512046 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.512060 4661 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.512074 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.512085 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.512097 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.512121 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.512132 4661 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f0c818b-31de-43ee-a20a-1fc174261b42-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.512141 4661 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1f0c818b-31de-43ee-a20a-1fc174261b42-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.610841 4661 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.614953 4661 generic.go:334] "Generic (PLEG): container finished" podID="1f0c818b-31de-43ee-a20a-1fc174261b42" containerID="6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64" exitCode=0 Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.614990 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" event={"ID":"1f0c818b-31de-43ee-a20a-1fc174261b42","Type":"ContainerDied","Data":"6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64"} Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.615018 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" event={"ID":"1f0c818b-31de-43ee-a20a-1fc174261b42","Type":"ContainerDied","Data":"3a058474f96937277440c2a027dc0ba305d810d19fee387fa794396334472896"} Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.615035 4661 scope.go:117] "RemoveContainer" containerID="6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.615138 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bh9mt" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.637050 4661 scope.go:117] "RemoveContainer" containerID="6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64" Jan 20 18:10:17 crc kubenswrapper[4661]: E0120 18:10:17.637654 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64\": container with ID starting with 6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64 not found: ID does not exist" containerID="6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.637705 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64"} err="failed to get container status \"6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64\": rpc error: code = NotFound desc = could not find container \"6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64\": container with ID starting with 6df115046f10fec312525b9af37d3626b33aee7a1e999247aa0d2f2ea30d2a64 not found: ID does not exist" Jan 20 18:10:17 crc kubenswrapper[4661]: I0120 18:10:17.663642 4661 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6c86cce9-f1ca-479e-bf56-44fa9fb7c2c1" Jan 20 18:10:18 crc kubenswrapper[4661]: E0120 18:10:18.077717 4661 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 20 18:10:18 crc kubenswrapper[4661]: E0120 18:10:18.198952 4661 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 20 18:10:18 crc kubenswrapper[4661]: E0120 18:10:18.553764 4661 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 20 18:10:18 crc kubenswrapper[4661]: I0120 18:10:18.623131 4661 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5947c5f0-b932-4127-a183-6b9023784c81" Jan 20 18:10:18 crc kubenswrapper[4661]: I0120 18:10:18.623173 4661 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5947c5f0-b932-4127-a183-6b9023784c81" Jan 20 18:10:18 crc kubenswrapper[4661]: I0120 18:10:18.628219 4661 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6c86cce9-f1ca-479e-bf56-44fa9fb7c2c1" Jan 20 18:10:18 crc kubenswrapper[4661]: I0120 18:10:18.629245 4661 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://0b95180bb38a8ea9ed6258950ca15370974dda2f44cb0617c9b09120738dc508" Jan 20 18:10:18 crc kubenswrapper[4661]: I0120 18:10:18.629274 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:10:19 crc kubenswrapper[4661]: I0120 18:10:19.630741 4661 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5947c5f0-b932-4127-a183-6b9023784c81" Jan 20 18:10:19 crc kubenswrapper[4661]: I0120 18:10:19.630782 4661 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5947c5f0-b932-4127-a183-6b9023784c81" Jan 20 18:10:19 crc kubenswrapper[4661]: I0120 18:10:19.637221 4661 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6c86cce9-f1ca-479e-bf56-44fa9fb7c2c1" Jan 20 18:10:24 crc kubenswrapper[4661]: I0120 18:10:24.461918 4661 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 20 18:10:24 crc kubenswrapper[4661]: I0120 18:10:24.462190 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 20 18:10:27 crc kubenswrapper[4661]: I0120 18:10:27.341306 4661 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 20 18:10:27 crc kubenswrapper[4661]: I0120 18:10:27.343705 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 20 18:10:28 crc kubenswrapper[4661]: I0120 18:10:28.053317 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 20 18:10:28 crc kubenswrapper[4661]: I0120 18:10:28.109222 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 20 18:10:28 crc kubenswrapper[4661]: I0120 18:10:28.630143 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 20 18:10:28 crc kubenswrapper[4661]: I0120 18:10:28.696572 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 20 18:10:28 crc kubenswrapper[4661]: I0120 18:10:28.728726 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 20 18:10:28 crc kubenswrapper[4661]: I0120 18:10:28.896945 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 20 18:10:29 crc kubenswrapper[4661]: I0120 18:10:29.044697 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 20 18:10:29 crc kubenswrapper[4661]: I0120 18:10:29.693554 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 20 18:10:29 crc kubenswrapper[4661]: I0120 18:10:29.866020 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 
18:10:29 crc kubenswrapper[4661]: I0120 18:10:29.969188 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 20 18:10:30 crc kubenswrapper[4661]: I0120 18:10:30.034510 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 20 18:10:30 crc kubenswrapper[4661]: I0120 18:10:30.038924 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 20 18:10:30 crc kubenswrapper[4661]: I0120 18:10:30.060465 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 20 18:10:30 crc kubenswrapper[4661]: I0120 18:10:30.133077 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 20 18:10:30 crc kubenswrapper[4661]: I0120 18:10:30.349582 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 20 18:10:30 crc kubenswrapper[4661]: I0120 18:10:30.360925 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 20 18:10:30 crc kubenswrapper[4661]: I0120 18:10:30.386440 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 20 18:10:30 crc kubenswrapper[4661]: I0120 18:10:30.619197 4661 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 20 18:10:30 crc kubenswrapper[4661]: I0120 18:10:30.662476 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 20 18:10:30 crc kubenswrapper[4661]: I0120 18:10:30.782389 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 20 18:10:30 crc kubenswrapper[4661]: I0120 18:10:30.936499 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.005225 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.089250 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.158433 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.236315 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.445468 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.486867 4661 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.539803 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 20 18:10:31 crc 
kubenswrapper[4661]: I0120 18:10:31.609030 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.635172 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.705713 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.719547 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.742469 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.754069 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.905404 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 20 18:10:31 crc kubenswrapper[4661]: I0120 18:10:31.912272 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.089367 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.103920 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.157945 4661 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.288781 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.319501 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.333407 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.392276 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.394898 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.419625 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.471516 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.495249 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.497207 4661 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.611779 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.714796 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.767080 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.774590 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.900730 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.924318 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 20 18:10:32 crc kubenswrapper[4661]: I0120 18:10:32.996286 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.035544 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.110162 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.121260 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.192877 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.316847 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.345442 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.392107 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.421757 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.433517 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.449369 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.555706 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.608370 4661 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.612366 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.682610 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.683757 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.808211 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.820037 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.821335 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 20 18:10:33 crc kubenswrapper[4661]: I0120 18:10:33.922909 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.011771 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.065004 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.109806 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.136817 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.161135 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.285177 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.316004 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.463022 4661 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.463366 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 
18:10:34.463423 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.464020 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"da05a19eaa18ff74143f4df1e25642832831b668e6c1e8319b8386868c778345"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.464145 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://da05a19eaa18ff74143f4df1e25642832831b668e6c1e8319b8386868c778345" gracePeriod=30 Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.490837 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.503560 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.529931 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.647647 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.660746 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.698580 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.708437 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.792621 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.829922 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.882590 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.944768 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 18:10:34 crc kubenswrapper[4661]: I0120 18:10:34.994444 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 20 18:10:35 crc kubenswrapper[4661]: I0120 18:10:35.008086 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 20 18:10:35 crc kubenswrapper[4661]: I0120 18:10:35.103215 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 20 18:10:35 
crc kubenswrapper[4661]: I0120 18:10:35.119612 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 20 18:10:35 crc kubenswrapper[4661]: I0120 18:10:35.325634 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 20 18:10:35 crc kubenswrapper[4661]: I0120 18:10:35.337985 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 20 18:10:35 crc kubenswrapper[4661]: I0120 18:10:35.518434 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 20 18:10:35 crc kubenswrapper[4661]: I0120 18:10:35.521048 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 20 18:10:35 crc kubenswrapper[4661]: I0120 18:10:35.522854 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 20 18:10:35 crc kubenswrapper[4661]: I0120 18:10:35.692191 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 20 18:10:35 crc kubenswrapper[4661]: I0120 18:10:35.719538 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 18:10:35 crc kubenswrapper[4661]: I0120 18:10:35.953181 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 20 18:10:35 crc kubenswrapper[4661]: I0120 18:10:35.953327 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 20 18:10:35 crc kubenswrapper[4661]: I0120 18:10:35.995155 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.016233 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.025815 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.034280 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.090727 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.095104 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.108013 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.143149 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.188773 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 20 18:10:36 crc 
kubenswrapper[4661]: I0120 18:10:36.257739 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.337056 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.357249 4661 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.369044 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bh9mt","openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.369572 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.378424 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.413209 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.413175486 podStartE2EDuration="19.413175486s" podCreationTimestamp="2026-01-20 18:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:10:36.403308122 +0000 UTC m=+292.734097804" watchObservedRunningTime="2026-01-20 18:10:36.413175486 +0000 UTC m=+292.743965188" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.445218 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.503026 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.644371 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.723993 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.740467 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.859451 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.892390 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.902566 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.926126 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 20 18:10:36 crc kubenswrapper[4661]: I0120 18:10:36.997261 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 20 
18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.010336 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.026478 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.049944 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.058752 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.066632 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.155303 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.165974 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.177199 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.183529 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.206283 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.232861 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.522847 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.536974 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.567163 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.622266 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.634402 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.680394 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.695730 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.732064 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 18:10:37 crc 
kubenswrapper[4661]: I0120 18:10:37.850413 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 20 18:10:37 crc kubenswrapper[4661]: I0120 18:10:37.865274 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:37.999909 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:38.019225 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:38.149200 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0c818b-31de-43ee-a20a-1fc174261b42" path="/var/lib/kubelet/pods/1f0c818b-31de-43ee-a20a-1fc174261b42/volumes" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:38.157541 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:38.290559 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:38.388718 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:38.505361 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:38.527052 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:38.609152 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:38.611401 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:38.633518 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:38.909960 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:38.939226 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 20 18:10:38 crc kubenswrapper[4661]: I0120 18:10:38.984322 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.050583 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.109914 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.111859 4661 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.140549 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.156763 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.210951 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.287936 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.301632 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.448213 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.500176 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.559823 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.687173 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.699054 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.846631 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.863464 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.867486 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.904011 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 20 18:10:39 crc kubenswrapper[4661]: I0120 18:10:39.920401 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.000478 4661 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.000832 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://727bc80447e0f5d992ff97889a574647d7a8b35386fd4c933d3fa1bfd4bd76fa" gracePeriod=5 Jan 20 18:10:40 crc 
kubenswrapper[4661]: I0120 18:10:40.030942 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.090699 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.152910 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.155636 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.179205 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.188785 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.256106 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.431818 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.447816 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.454705 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.458690 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.480775 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.503835 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.576320 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.601393 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.620952 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.674048 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.711504 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.731783 4661 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.839281 4661 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.865967 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.872053 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.911782 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 20 18:10:40 crc kubenswrapper[4661]: I0120 18:10:40.983531 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 20 18:10:41 crc kubenswrapper[4661]: I0120 18:10:41.017898 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 20 18:10:41 crc kubenswrapper[4661]: I0120 18:10:41.079407 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 20 18:10:41 crc kubenswrapper[4661]: I0120 18:10:41.254387 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 20 18:10:41 crc kubenswrapper[4661]: I0120 18:10:41.275984 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 20 18:10:41 crc kubenswrapper[4661]: I0120 18:10:41.313057 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 20 18:10:41 crc kubenswrapper[4661]: I0120 18:10:41.342163 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 20 18:10:41 crc kubenswrapper[4661]: I0120 18:10:41.354524 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 18:10:41 crc kubenswrapper[4661]: I0120 18:10:41.619939 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 20 18:10:41 crc kubenswrapper[4661]: I0120 18:10:41.743377 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 20 18:10:41 crc kubenswrapper[4661]: I0120 18:10:41.763488 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 20 18:10:41 crc kubenswrapper[4661]: I0120 18:10:41.837176 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 20 18:10:41 crc kubenswrapper[4661]: I0120 18:10:41.917406 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 20 18:10:42 crc kubenswrapper[4661]: I0120 18:10:42.002337 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 20 18:10:42 crc kubenswrapper[4661]: I0120 18:10:42.023256 4661 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 20 18:10:42 crc kubenswrapper[4661]: I0120 18:10:42.052767 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 20 18:10:42 crc kubenswrapper[4661]: I0120 18:10:42.191021 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 20 18:10:42 crc kubenswrapper[4661]: I0120 18:10:42.283916 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 20 18:10:42 crc kubenswrapper[4661]: I0120 18:10:42.391011 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 20 18:10:42 crc kubenswrapper[4661]: I0120 18:10:42.392772 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 20 18:10:42 crc kubenswrapper[4661]: I0120 18:10:42.423168 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 20 18:10:42 crc kubenswrapper[4661]: I0120 18:10:42.446035 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 20 18:10:42 crc kubenswrapper[4661]: I0120 18:10:42.505186 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 20 18:10:42 crc kubenswrapper[4661]: I0120 18:10:42.836015 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 20 18:10:42 crc kubenswrapper[4661]: I0120 18:10:42.851451 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 20 18:10:42 crc kubenswrapper[4661]: I0120 18:10:42.899825 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.119980 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.180986 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.248595 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.310030 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7f54ff7574-qv4rm"] Jan 20 18:10:43 crc kubenswrapper[4661]: E0120 18:10:43.310218 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" containerName="installer" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.310230 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" containerName="installer" Jan 20 18:10:43 crc kubenswrapper[4661]: E0120 18:10:43.310240 4661 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.310246 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 18:10:43 crc kubenswrapper[4661]: E0120 18:10:43.310257 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0c818b-31de-43ee-a20a-1fc174261b42" containerName="oauth-openshift" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.310263 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0c818b-31de-43ee-a20a-1fc174261b42" containerName="oauth-openshift" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.310360 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0c818b-31de-43ee-a20a-1fc174261b42" containerName="oauth-openshift" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.310371 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ed603c-7744-4ed0-b168-f022e5b8e145" containerName="installer" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.310380 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.310777 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.315115 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.315868 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.316130 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.316320 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.316559 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.316739 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.316888 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.317129 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.317330 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.319284 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.320996 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 20 18:10:43 crc 
kubenswrapper[4661]: I0120 18:10:43.323858 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.331572 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.336835 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.337890 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.340107 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f54ff7574-qv4rm"] Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.363158 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.471876 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.471942 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.471983 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.472010 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.472082 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.472106 4661 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.472139 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.472162 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff8652fe-0f18-4e15-97d2-08e54353a88e-audit-dir\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.472189 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnc55\" (UniqueName: \"kubernetes.io/projected/ff8652fe-0f18-4e15-97d2-08e54353a88e-kube-api-access-dnc55\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.472229 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-session\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.472246 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.472263 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-user-template-error\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.472379 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-user-template-login\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " 
pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.472459 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ff8652fe-0f18-4e15-97d2-08e54353a88e-audit-policies\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.573641 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.573717 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.573759 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.573783 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.573799 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.573841 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff8652fe-0f18-4e15-97d2-08e54353a88e-audit-dir\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.573871 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnc55\" (UniqueName: \"kubernetes.io/projected/ff8652fe-0f18-4e15-97d2-08e54353a88e-kube-api-access-dnc55\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " 
pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.573910 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-session\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.573926 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.573942 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-user-template-error\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.573975 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-user-template-login\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.573995 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ff8652fe-0f18-4e15-97d2-08e54353a88e-audit-policies\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.574025 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.574092 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ff8652fe-0f18-4e15-97d2-08e54353a88e-audit-dir\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.574269 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc 
kubenswrapper[4661]: I0120 18:10:43.575884 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ff8652fe-0f18-4e15-97d2-08e54353a88e-audit-policies\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.576138 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.576289 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.577299 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.581865 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.587314 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.589450 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.590501 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-user-template-error\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.591190 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.592585 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.597428 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnc55\" (UniqueName: \"kubernetes.io/projected/ff8652fe-0f18-4e15-97d2-08e54353a88e-kube-api-access-dnc55\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.599008 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-user-template-login\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.600151 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ff8652fe-0f18-4e15-97d2-08e54353a88e-v4-0-config-system-session\") pod \"oauth-openshift-7f54ff7574-qv4rm\" (UID: \"ff8652fe-0f18-4e15-97d2-08e54353a88e\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.654881 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.858968 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.923286 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 20 18:10:43 crc kubenswrapper[4661]: I0120 18:10:43.992253 4661 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 20 18:10:44 crc kubenswrapper[4661]: I0120 18:10:44.134869 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f54ff7574-qv4rm"] Jan 20 18:10:44 crc kubenswrapper[4661]: I0120 18:10:44.804259 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" event={"ID":"ff8652fe-0f18-4e15-97d2-08e54353a88e","Type":"ContainerStarted","Data":"2701788574b56e41a5e9737c6b3862cb5e5124341256705393920f33b85070fa"} Jan 20 18:10:44 crc kubenswrapper[4661]: I0120 18:10:44.804583 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" event={"ID":"ff8652fe-0f18-4e15-97d2-08e54353a88e","Type":"ContainerStarted","Data":"580f9665699c50c18bf3bb55680eb581aa21708a64bfe482cb9d1a71f48a68b5"} Jan 20 18:10:44 crc kubenswrapper[4661]: I0120 18:10:44.804855 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:44 crc kubenswrapper[4661]: I0120 18:10:44.896983 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" Jan 20 18:10:44 crc kubenswrapper[4661]: I0120 18:10:44.925644 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" podStartSLOduration=53.925624734 podStartE2EDuration="53.925624734s" podCreationTimestamp="2026-01-20 18:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:10:44.830312425 +0000 UTC m=+301.161102117" watchObservedRunningTime="2026-01-20 18:10:44.925624734 +0000 UTC m=+301.256414406" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.594571 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.595086 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.714813 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.714875 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.714903 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.714920 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.714960 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.714988 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.715256 4661 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.715306 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.715338 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.715362 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.727478 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.819833 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.819931 4661 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="727bc80447e0f5d992ff97889a574647d7a8b35386fd4c933d3fa1bfd4bd76fa" exitCode=137 Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.820053 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.820138 4661 scope.go:117] "RemoveContainer" containerID="727bc80447e0f5d992ff97889a574647d7a8b35386fd4c933d3fa1bfd4bd76fa" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.822145 4661 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.822237 4661 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.822268 4661 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.822294 4661 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.849627 4661 scope.go:117] "RemoveContainer" containerID="727bc80447e0f5d992ff97889a574647d7a8b35386fd4c933d3fa1bfd4bd76fa" Jan 20 18:10:45 crc kubenswrapper[4661]: E0120 18:10:45.851376 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"727bc80447e0f5d992ff97889a574647d7a8b35386fd4c933d3fa1bfd4bd76fa\": container with ID starting with 727bc80447e0f5d992ff97889a574647d7a8b35386fd4c933d3fa1bfd4bd76fa not found: ID does not exist" containerID="727bc80447e0f5d992ff97889a574647d7a8b35386fd4c933d3fa1bfd4bd76fa" Jan 20 18:10:45 crc kubenswrapper[4661]: I0120 18:10:45.851455 4661 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"727bc80447e0f5d992ff97889a574647d7a8b35386fd4c933d3fa1bfd4bd76fa"} err="failed to get container status \"727bc80447e0f5d992ff97889a574647d7a8b35386fd4c933d3fa1bfd4bd76fa\": rpc error: code = NotFound desc = could not find container \"727bc80447e0f5d992ff97889a574647d7a8b35386fd4c933d3fa1bfd4bd76fa\": container with ID starting with 727bc80447e0f5d992ff97889a574647d7a8b35386fd4c933d3fa1bfd4bd76fa not found: ID does not exist" Jan 20 18:10:46 crc kubenswrapper[4661]: I0120 18:10:46.157602 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 20 18:11:00 crc kubenswrapper[4661]: I0120 18:11:00.340824 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 20 18:11:04 crc kubenswrapper[4661]: I0120 18:11:04.982071 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 20 18:11:04 crc kubenswrapper[4661]: I0120 18:11:04.986909 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 18:11:04 crc kubenswrapper[4661]: I0120 18:11:04.986995 4661 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="da05a19eaa18ff74143f4df1e25642832831b668e6c1e8319b8386868c778345" exitCode=137 Jan 20 18:11:04 crc kubenswrapper[4661]: I0120 18:11:04.987045 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"da05a19eaa18ff74143f4df1e25642832831b668e6c1e8319b8386868c778345"} Jan 20 18:11:04 crc kubenswrapper[4661]: I0120 18:11:04.987094 4661 scope.go:117] "RemoveContainer" containerID="008613eee577926f777b6eba5a93379dca1203429fb29918bb057f2aba5eba4e" Jan 20 18:11:05 crc kubenswrapper[4661]: I0120 18:11:05.997520 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 20 18:11:05 crc kubenswrapper[4661]: I0120 18:11:05.999564 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b162a160ff8e2f18a9a48aea0bacaec5c4f2d5a0c67acd255a1ef88090be5285"} Jan 20 18:11:06 crc kubenswrapper[4661]: I0120 18:11:06.853241 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:11:14 crc kubenswrapper[4661]: I0120 18:11:14.462587 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:11:14 crc kubenswrapper[4661]: I0120 18:11:14.472571 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:11:15 crc kubenswrapper[4661]: I0120 18:11:15.143186 4661 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.022886 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wd4nq"] Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.023327 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" podUID="c223ef1c-922a-42b8-b8d0-428a27f5ae6d" containerName="controller-manager" containerID="cri-o://53d40a2a00f8e68cb2f04c1ac5bca2cdb1fb4d312ff10b495cb1fb8fb2e4bd2e" gracePeriod=30 Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.026802 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg"] Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.027032 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" podUID="457c15d5-4066-4d88-bbb4-a9fe13de20cd" containerName="route-controller-manager" containerID="cri-o://5b9a63bea591f294d59b34ca047fcbc567057b96a9f92f7e9e704c64575782d8" gracePeriod=30 Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.210483 4661 generic.go:334] "Generic (PLEG): container finished" podID="c223ef1c-922a-42b8-b8d0-428a27f5ae6d" containerID="53d40a2a00f8e68cb2f04c1ac5bca2cdb1fb4d312ff10b495cb1fb8fb2e4bd2e" exitCode=0 Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.210567 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" event={"ID":"c223ef1c-922a-42b8-b8d0-428a27f5ae6d","Type":"ContainerDied","Data":"53d40a2a00f8e68cb2f04c1ac5bca2cdb1fb4d312ff10b495cb1fb8fb2e4bd2e"} Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.217476 4661 generic.go:334] "Generic (PLEG): container finished" podID="457c15d5-4066-4d88-bbb4-a9fe13de20cd" containerID="5b9a63bea591f294d59b34ca047fcbc567057b96a9f92f7e9e704c64575782d8" exitCode=0 Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.217523 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" event={"ID":"457c15d5-4066-4d88-bbb4-a9fe13de20cd","Type":"ContainerDied","Data":"5b9a63bea591f294d59b34ca047fcbc567057b96a9f92f7e9e704c64575782d8"} Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.495097 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.581205 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.608631 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-config\") pod \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.608730 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/457c15d5-4066-4d88-bbb4-a9fe13de20cd-serving-cert\") pod \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.608767 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8vww\" (UniqueName: \"kubernetes.io/projected/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-kube-api-access-l8vww\") pod \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.608796 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-serving-cert\") pod \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.608847 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/457c15d5-4066-4d88-bbb4-a9fe13de20cd-config\") pod \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.608867 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-proxy-ca-bundles\") pod \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.608903 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-client-ca\") pod \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\" (UID: \"c223ef1c-922a-42b8-b8d0-428a27f5ae6d\") " Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.608922 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8xsw\" (UniqueName: \"kubernetes.io/projected/457c15d5-4066-4d88-bbb4-a9fe13de20cd-kube-api-access-g8xsw\") pod \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.608941 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/457c15d5-4066-4d88-bbb4-a9fe13de20cd-client-ca\") pod \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\" (UID: \"457c15d5-4066-4d88-bbb4-a9fe13de20cd\") " Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.609744 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/457c15d5-4066-4d88-bbb4-a9fe13de20cd-client-ca" (OuterVolumeSpecName: "client-ca") pod "457c15d5-4066-4d88-bbb4-a9fe13de20cd" 
(UID: "457c15d5-4066-4d88-bbb4-a9fe13de20cd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.609960 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-client-ca" (OuterVolumeSpecName: "client-ca") pod "c223ef1c-922a-42b8-b8d0-428a27f5ae6d" (UID: "c223ef1c-922a-42b8-b8d0-428a27f5ae6d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.610272 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c223ef1c-922a-42b8-b8d0-428a27f5ae6d" (UID: "c223ef1c-922a-42b8-b8d0-428a27f5ae6d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.610353 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/457c15d5-4066-4d88-bbb4-a9fe13de20cd-config" (OuterVolumeSpecName: "config") pod "457c15d5-4066-4d88-bbb4-a9fe13de20cd" (UID: "457c15d5-4066-4d88-bbb4-a9fe13de20cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.610886 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-config" (OuterVolumeSpecName: "config") pod "c223ef1c-922a-42b8-b8d0-428a27f5ae6d" (UID: "c223ef1c-922a-42b8-b8d0-428a27f5ae6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.643189 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457c15d5-4066-4d88-bbb4-a9fe13de20cd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "457c15d5-4066-4d88-bbb4-a9fe13de20cd" (UID: "457c15d5-4066-4d88-bbb4-a9fe13de20cd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.644637 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c223ef1c-922a-42b8-b8d0-428a27f5ae6d" (UID: "c223ef1c-922a-42b8-b8d0-428a27f5ae6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.644901 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-kube-api-access-l8vww" (OuterVolumeSpecName: "kube-api-access-l8vww") pod "c223ef1c-922a-42b8-b8d0-428a27f5ae6d" (UID: "c223ef1c-922a-42b8-b8d0-428a27f5ae6d"). InnerVolumeSpecName "kube-api-access-l8vww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.644992 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/457c15d5-4066-4d88-bbb4-a9fe13de20cd-kube-api-access-g8xsw" (OuterVolumeSpecName: "kube-api-access-g8xsw") pod "457c15d5-4066-4d88-bbb4-a9fe13de20cd" (UID: "457c15d5-4066-4d88-bbb4-a9fe13de20cd"). 
InnerVolumeSpecName "kube-api-access-g8xsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.710371 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/457c15d5-4066-4d88-bbb4-a9fe13de20cd-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.710408 4661 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.710420 4661 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.710429 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8xsw\" (UniqueName: \"kubernetes.io/projected/457c15d5-4066-4d88-bbb4-a9fe13de20cd-kube-api-access-g8xsw\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.710438 4661 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/457c15d5-4066-4d88-bbb4-a9fe13de20cd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.710446 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.710457 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/457c15d5-4066-4d88-bbb4-a9fe13de20cd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.710465 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8vww\" (UniqueName: \"kubernetes.io/projected/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-kube-api-access-l8vww\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:25 crc kubenswrapper[4661]: I0120 18:11:25.710473 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c223ef1c-922a-42b8-b8d0-428a27f5ae6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.227216 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" event={"ID":"c223ef1c-922a-42b8-b8d0-428a27f5ae6d","Type":"ContainerDied","Data":"5f1cbe3dfa9e2632fdd27e9e8f31f139dbbc95a11f739f8e4d9a8d35fb388264"} Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.227272 4661 scope.go:117] "RemoveContainer" containerID="53d40a2a00f8e68cb2f04c1ac5bca2cdb1fb4d312ff10b495cb1fb8fb2e4bd2e" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.227385 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wd4nq" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.230850 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" event={"ID":"457c15d5-4066-4d88-bbb4-a9fe13de20cd","Type":"ContainerDied","Data":"a0e24c38ee002ec2152c650325d169931e8f23fe4d6b5c87987d1d7ee1d9decf"} Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.230920 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.250162 4661 scope.go:117] "RemoveContainer" containerID="5b9a63bea591f294d59b34ca047fcbc567057b96a9f92f7e9e704c64575782d8" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.255862 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg"] Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.264609 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j48cg"] Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.273442 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wd4nq"] Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.278316 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wd4nq"] Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.977789 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx"] Jan 20 18:11:26 crc kubenswrapper[4661]: E0120 18:11:26.978126 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c223ef1c-922a-42b8-b8d0-428a27f5ae6d" containerName="controller-manager" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.978168 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="c223ef1c-922a-42b8-b8d0-428a27f5ae6d" containerName="controller-manager" Jan 20 18:11:26 crc kubenswrapper[4661]: E0120 18:11:26.978198 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457c15d5-4066-4d88-bbb4-a9fe13de20cd" containerName="route-controller-manager" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.978210 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="457c15d5-4066-4d88-bbb4-a9fe13de20cd" containerName="route-controller-manager" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.978380 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="457c15d5-4066-4d88-bbb4-a9fe13de20cd" containerName="route-controller-manager" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.978413 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="c223ef1c-922a-42b8-b8d0-428a27f5ae6d" containerName="controller-manager" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.979060 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.982097 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.982098 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.983309 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.983521 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.984428 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.984775 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.987700 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5494bbdbdf-l8f54"] Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.988544 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.990045 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.990393 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.990962 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.991300 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.991517 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.991729 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 18:11:26 crc kubenswrapper[4661]: I0120 18:11:26.997713 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5494bbdbdf-l8f54"] Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.002621 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.004835 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx"] Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.030050 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-proxy-ca-bundles\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.030100 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pqgt\" (UniqueName: \"kubernetes.io/projected/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-kube-api-access-4pqgt\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.030191 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-config\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.030272 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-serving-cert\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.030325 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-client-ca\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.131796 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-config\") pod \"route-controller-manager-795b8d5757-4qlvx\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.131860 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-proxy-ca-bundles\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.131884 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pqgt\" (UniqueName: \"kubernetes.io/projected/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-kube-api-access-4pqgt\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.131907 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-config\") pod 
\"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.131929 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-serving-cert\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.131955 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb6sq\" (UniqueName: \"kubernetes.io/projected/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-kube-api-access-zb6sq\") pod \"route-controller-manager-795b8d5757-4qlvx\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.131973 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-client-ca\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.131995 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-client-ca\") pod \"route-controller-manager-795b8d5757-4qlvx\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.132012 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-serving-cert\") pod \"route-controller-manager-795b8d5757-4qlvx\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.133099 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-proxy-ca-bundles\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.134187 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-config\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.134504 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-client-ca\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " 
pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.138262 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-serving-cert\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.170300 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pqgt\" (UniqueName: \"kubernetes.io/projected/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-kube-api-access-4pqgt\") pod \"controller-manager-5494bbdbdf-l8f54\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.233393 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb6sq\" (UniqueName: \"kubernetes.io/projected/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-kube-api-access-zb6sq\") pod \"route-controller-manager-795b8d5757-4qlvx\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.233449 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-client-ca\") pod \"route-controller-manager-795b8d5757-4qlvx\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.233472 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-serving-cert\") pod \"route-controller-manager-795b8d5757-4qlvx\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.233500 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-config\") pod \"route-controller-manager-795b8d5757-4qlvx\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.234337 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-client-ca\") pod \"route-controller-manager-795b8d5757-4qlvx\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.234856 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-config\") pod \"route-controller-manager-795b8d5757-4qlvx\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.237365 4661 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-serving-cert\") pod \"route-controller-manager-795b8d5757-4qlvx\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.256986 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb6sq\" (UniqueName: \"kubernetes.io/projected/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-kube-api-access-zb6sq\") pod \"route-controller-manager-795b8d5757-4qlvx\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.297944 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.308296 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.644037 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5494bbdbdf-l8f54"] Jan 20 18:11:27 crc kubenswrapper[4661]: I0120 18:11:27.800729 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx"] Jan 20 18:11:27 crc kubenswrapper[4661]: W0120 18:11:27.805743 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb9c754e_79ee_46d7_9d9c_8d2dc3ea55a6.slice/crio-05a0adbc9a277ab333a2d765625743b08ee573035eec54baaa0d70b185b665d8 WatchSource:0}: Error finding container 05a0adbc9a277ab333a2d765625743b08ee573035eec54baaa0d70b185b665d8: Status 404 returned error can't find the container with id 05a0adbc9a277ab333a2d765625743b08ee573035eec54baaa0d70b185b665d8 Jan 20 18:11:28 crc kubenswrapper[4661]: I0120 18:11:28.147479 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="457c15d5-4066-4d88-bbb4-a9fe13de20cd" path="/var/lib/kubelet/pods/457c15d5-4066-4d88-bbb4-a9fe13de20cd/volumes" Jan 20 18:11:28 crc kubenswrapper[4661]: I0120 18:11:28.148271 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c223ef1c-922a-42b8-b8d0-428a27f5ae6d" path="/var/lib/kubelet/pods/c223ef1c-922a-42b8-b8d0-428a27f5ae6d/volumes" Jan 20 18:11:28 crc kubenswrapper[4661]: I0120 18:11:28.244512 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" event={"ID":"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6","Type":"ContainerStarted","Data":"dc3fc549f55c23f7aece1dd8b6baa4a74c23c1cfd7e34f71b3b74c9f4f437d42"} Jan 20 18:11:28 crc kubenswrapper[4661]: I0120 18:11:28.245471 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" event={"ID":"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6","Type":"ContainerStarted","Data":"05a0adbc9a277ab333a2d765625743b08ee573035eec54baaa0d70b185b665d8"} Jan 20 18:11:28 crc kubenswrapper[4661]: I0120 18:11:28.245572 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:28 crc kubenswrapper[4661]: I0120 18:11:28.245761 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" event={"ID":"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec","Type":"ContainerStarted","Data":"03b211d9f0d823bbb48a0a17a7ad883c10620323e37466bf1b1453c96e930212"} Jan 20 18:11:28 crc kubenswrapper[4661]: I0120 18:11:28.245810 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" event={"ID":"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec","Type":"ContainerStarted","Data":"01e326edafa225a6e654546d622c33038092b5787ac98197fc84ad10858bf8ab"} Jan 20 18:11:28 crc kubenswrapper[4661]: I0120 18:11:28.246042 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:28 crc kubenswrapper[4661]: I0120 18:11:28.269587 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" podStartSLOduration=3.269570868 podStartE2EDuration="3.269570868s" podCreationTimestamp="2026-01-20 18:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:11:28.269218419 +0000 UTC m=+344.600008081" watchObservedRunningTime="2026-01-20 18:11:28.269570868 +0000 UTC m=+344.600360530" Jan 20 18:11:28 crc kubenswrapper[4661]: I0120 18:11:28.276050 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:11:28 crc kubenswrapper[4661]: I0120 18:11:28.290998 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:11:28 crc kubenswrapper[4661]: I0120 18:11:28.325835 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" podStartSLOduration=3.325807878 podStartE2EDuration="3.325807878s" podCreationTimestamp="2026-01-20 18:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:11:28.299548613 +0000 UTC m=+344.630338285" watchObservedRunningTime="2026-01-20 18:11:28.325807878 +0000 UTC m=+344.656597540" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.160520 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5jf5n"] Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.161486 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5jf5n" podUID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" containerName="registry-server" containerID="cri-o://4ce98ee0b20cad31342ee8168f1f3238f10a2059a3f32502cebb6762ceffac4f" gracePeriod=30 Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.172331 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7lfrt"] Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.172827 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7lfrt" podUID="c34658be-616a-469f-a560-61709f82cde6" 
containerName="registry-server" containerID="cri-o://4cd41c4745ea7ae489870a1a22a7169f3afbab9860a41d8ce1244f1853a7bd8d" gracePeriod=30 Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.188298 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flvxz"] Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.188637 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" podUID="9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4" containerName="marketplace-operator" containerID="cri-o://4f5c399f1f43fa72aa1342d35d3156571d0a9548a99a1e3dd64db11795daec82" gracePeriod=30 Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.200841 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvkbf"] Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.201452 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rvkbf" podUID="40861cf6-5e11-46ad-be02-b415c4f06dee" containerName="registry-server" containerID="cri-o://20844769653e3bc8dd46c3d2e967c83d2cddc6e594ea5c2d198fb826c885b921" gracePeriod=30 Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.209941 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cr2m2"] Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.210224 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cr2m2" podUID="a087d508-430f-45ba-bff2-b58d06cebd51" containerName="registry-server" containerID="cri-o://46382e293347af2c3406396bb7946e118169b344875f2a17c16febcf9dd77459" gracePeriod=30 Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.216984 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p2zck"] Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.217610 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.236125 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p2zck"] Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.284350 4661 generic.go:334] "Generic (PLEG): container finished" podID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" containerID="4ce98ee0b20cad31342ee8168f1f3238f10a2059a3f32502cebb6762ceffac4f" exitCode=0 Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.284385 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jf5n" event={"ID":"ac4a870b-4ca8-4046-b9b1-6001f8b13a51","Type":"ContainerDied","Data":"4ce98ee0b20cad31342ee8168f1f3238f10a2059a3f32502cebb6762ceffac4f"} Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.368533 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx5st\" (UniqueName: \"kubernetes.io/projected/663dd2a4-8e69-41d7-b561-4419dd0b4e90-kube-api-access-wx5st\") pod \"marketplace-operator-79b997595-p2zck\" (UID: \"663dd2a4-8e69-41d7-b561-4419dd0b4e90\") " pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.368600 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/663dd2a4-8e69-41d7-b561-4419dd0b4e90-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p2zck\" (UID: \"663dd2a4-8e69-41d7-b561-4419dd0b4e90\") " pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.368641 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/663dd2a4-8e69-41d7-b561-4419dd0b4e90-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p2zck\" (UID: \"663dd2a4-8e69-41d7-b561-4419dd0b4e90\") " pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.470193 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx5st\" (UniqueName: \"kubernetes.io/projected/663dd2a4-8e69-41d7-b561-4419dd0b4e90-kube-api-access-wx5st\") pod \"marketplace-operator-79b997595-p2zck\" (UID: \"663dd2a4-8e69-41d7-b561-4419dd0b4e90\") " pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.470762 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/663dd2a4-8e69-41d7-b561-4419dd0b4e90-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p2zck\" (UID: \"663dd2a4-8e69-41d7-b561-4419dd0b4e90\") " pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.470806 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/663dd2a4-8e69-41d7-b561-4419dd0b4e90-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p2zck\" (UID: \"663dd2a4-8e69-41d7-b561-4419dd0b4e90\") " pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" Jan 20 18:11:34 crc 
kubenswrapper[4661]: I0120 18:11:34.472099 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/663dd2a4-8e69-41d7-b561-4419dd0b4e90-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p2zck\" (UID: \"663dd2a4-8e69-41d7-b561-4419dd0b4e90\") " pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.476443 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/663dd2a4-8e69-41d7-b561-4419dd0b4e90-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p2zck\" (UID: \"663dd2a4-8e69-41d7-b561-4419dd0b4e90\") " pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.487645 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx5st\" (UniqueName: \"kubernetes.io/projected/663dd2a4-8e69-41d7-b561-4419dd0b4e90-kube-api-access-wx5st\") pod \"marketplace-operator-79b997595-p2zck\" (UID: \"663dd2a4-8e69-41d7-b561-4419dd0b4e90\") " pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.543986 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.662769 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.778494 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wchwz\" (UniqueName: \"kubernetes.io/projected/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-kube-api-access-wchwz\") pod \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\" (UID: \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\") " Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.778627 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-utilities\") pod \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\" (UID: \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\") " Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.778690 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-catalog-content\") pod \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\" (UID: \"ac4a870b-4ca8-4046-b9b1-6001f8b13a51\") " Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.779463 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-utilities" (OuterVolumeSpecName: "utilities") pod "ac4a870b-4ca8-4046-b9b1-6001f8b13a51" (UID: "ac4a870b-4ca8-4046-b9b1-6001f8b13a51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.785567 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-kube-api-access-wchwz" (OuterVolumeSpecName: "kube-api-access-wchwz") pod "ac4a870b-4ca8-4046-b9b1-6001f8b13a51" (UID: "ac4a870b-4ca8-4046-b9b1-6001f8b13a51"). 
InnerVolumeSpecName "kube-api-access-wchwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.826875 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac4a870b-4ca8-4046-b9b1-6001f8b13a51" (UID: "ac4a870b-4ca8-4046-b9b1-6001f8b13a51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.881320 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.881352 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.881364 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wchwz\" (UniqueName: \"kubernetes.io/projected/ac4a870b-4ca8-4046-b9b1-6001f8b13a51-kube-api-access-wchwz\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.896511 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.903250 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.953230 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.955915 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.982204 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g6cx\" (UniqueName: \"kubernetes.io/projected/40861cf6-5e11-46ad-be02-b415c4f06dee-kube-api-access-9g6cx\") pod \"40861cf6-5e11-46ad-be02-b415c4f06dee\" (UID: \"40861cf6-5e11-46ad-be02-b415c4f06dee\") " Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.982292 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40861cf6-5e11-46ad-be02-b415c4f06dee-utilities\") pod \"40861cf6-5e11-46ad-be02-b415c4f06dee\" (UID: \"40861cf6-5e11-46ad-be02-b415c4f06dee\") " Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.982313 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a087d508-430f-45ba-bff2-b58d06cebd51-utilities\") pod \"a087d508-430f-45ba-bff2-b58d06cebd51\" (UID: \"a087d508-430f-45ba-bff2-b58d06cebd51\") " Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.982355 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t9xx\" (UniqueName: \"kubernetes.io/projected/a087d508-430f-45ba-bff2-b58d06cebd51-kube-api-access-2t9xx\") pod \"a087d508-430f-45ba-bff2-b58d06cebd51\" (UID: \"a087d508-430f-45ba-bff2-b58d06cebd51\") " Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.982401 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a087d508-430f-45ba-bff2-b58d06cebd51-catalog-content\") pod \"a087d508-430f-45ba-bff2-b58d06cebd51\" (UID: \"a087d508-430f-45ba-bff2-b58d06cebd51\") " Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.982435 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40861cf6-5e11-46ad-be02-b415c4f06dee-catalog-content\") pod \"40861cf6-5e11-46ad-be02-b415c4f06dee\" (UID: \"40861cf6-5e11-46ad-be02-b415c4f06dee\") " Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.984193 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a087d508-430f-45ba-bff2-b58d06cebd51-utilities" (OuterVolumeSpecName: "utilities") pod "a087d508-430f-45ba-bff2-b58d06cebd51" (UID: "a087d508-430f-45ba-bff2-b58d06cebd51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.984409 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40861cf6-5e11-46ad-be02-b415c4f06dee-utilities" (OuterVolumeSpecName: "utilities") pod "40861cf6-5e11-46ad-be02-b415c4f06dee" (UID: "40861cf6-5e11-46ad-be02-b415c4f06dee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.990030 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40861cf6-5e11-46ad-be02-b415c4f06dee-kube-api-access-9g6cx" (OuterVolumeSpecName: "kube-api-access-9g6cx") pod "40861cf6-5e11-46ad-be02-b415c4f06dee" (UID: "40861cf6-5e11-46ad-be02-b415c4f06dee"). InnerVolumeSpecName "kube-api-access-9g6cx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:11:34 crc kubenswrapper[4661]: I0120 18:11:34.992069 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a087d508-430f-45ba-bff2-b58d06cebd51-kube-api-access-2t9xx" (OuterVolumeSpecName: "kube-api-access-2t9xx") pod "a087d508-430f-45ba-bff2-b58d06cebd51" (UID: "a087d508-430f-45ba-bff2-b58d06cebd51"). InnerVolumeSpecName "kube-api-access-2t9xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.009705 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40861cf6-5e11-46ad-be02-b415c4f06dee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40861cf6-5e11-46ad-be02-b415c4f06dee" (UID: "40861cf6-5e11-46ad-be02-b415c4f06dee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.083545 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhsns\" (UniqueName: \"kubernetes.io/projected/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-kube-api-access-fhsns\") pod \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\" (UID: \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\") " Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.083714 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-marketplace-operator-metrics\") pod \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\" (UID: \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\") " Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.083735 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpsks\" (UniqueName: \"kubernetes.io/projected/c34658be-616a-469f-a560-61709f82cde6-kube-api-access-lpsks\") pod \"c34658be-616a-469f-a560-61709f82cde6\" (UID: \"c34658be-616a-469f-a560-61709f82cde6\") " Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.083785 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34658be-616a-469f-a560-61709f82cde6-catalog-content\") pod \"c34658be-616a-469f-a560-61709f82cde6\" (UID: \"c34658be-616a-469f-a560-61709f82cde6\") " Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.083825 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34658be-616a-469f-a560-61709f82cde6-utilities\") pod \"c34658be-616a-469f-a560-61709f82cde6\" (UID: \"c34658be-616a-469f-a560-61709f82cde6\") " Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.083895 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-marketplace-trusted-ca\") pod \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\" (UID: \"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4\") " Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.084146 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40861cf6-5e11-46ad-be02-b415c4f06dee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.084180 4661 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-9g6cx\" (UniqueName: \"kubernetes.io/projected/40861cf6-5e11-46ad-be02-b415c4f06dee-kube-api-access-9g6cx\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.084192 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40861cf6-5e11-46ad-be02-b415c4f06dee-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.084204 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a087d508-430f-45ba-bff2-b58d06cebd51-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.084213 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t9xx\" (UniqueName: \"kubernetes.io/projected/a087d508-430f-45ba-bff2-b58d06cebd51-kube-api-access-2t9xx\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.084870 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4" (UID: "9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.085969 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34658be-616a-469f-a560-61709f82cde6-utilities" (OuterVolumeSpecName: "utilities") pod "c34658be-616a-469f-a560-61709f82cde6" (UID: "c34658be-616a-469f-a560-61709f82cde6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.087529 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4" (UID: "9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.089144 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-kube-api-access-fhsns" (OuterVolumeSpecName: "kube-api-access-fhsns") pod "9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4" (UID: "9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4"). InnerVolumeSpecName "kube-api-access-fhsns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.089466 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34658be-616a-469f-a560-61709f82cde6-kube-api-access-lpsks" (OuterVolumeSpecName: "kube-api-access-lpsks") pod "c34658be-616a-469f-a560-61709f82cde6" (UID: "c34658be-616a-469f-a560-61709f82cde6"). InnerVolumeSpecName "kube-api-access-lpsks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.104686 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a087d508-430f-45ba-bff2-b58d06cebd51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a087d508-430f-45ba-bff2-b58d06cebd51" (UID: "a087d508-430f-45ba-bff2-b58d06cebd51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.124204 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p2zck"] Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.143221 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34658be-616a-469f-a560-61709f82cde6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c34658be-616a-469f-a560-61709f82cde6" (UID: "c34658be-616a-469f-a560-61709f82cde6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.185579 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhsns\" (UniqueName: \"kubernetes.io/projected/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-kube-api-access-fhsns\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.186247 4661 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.186316 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpsks\" (UniqueName: \"kubernetes.io/projected/c34658be-616a-469f-a560-61709f82cde6-kube-api-access-lpsks\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.186401 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34658be-616a-469f-a560-61709f82cde6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.186460 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a087d508-430f-45ba-bff2-b58d06cebd51-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.186527 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34658be-616a-469f-a560-61709f82cde6-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.186584 4661 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.293508 4661 generic.go:334] "Generic (PLEG): container finished" podID="9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4" containerID="4f5c399f1f43fa72aa1342d35d3156571d0a9548a99a1e3dd64db11795daec82" exitCode=0 Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.293576 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" 
event={"ID":"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4","Type":"ContainerDied","Data":"4f5c399f1f43fa72aa1342d35d3156571d0a9548a99a1e3dd64db11795daec82"} Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.293605 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" event={"ID":"9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4","Type":"ContainerDied","Data":"01ccab6430930e877dfb12601065669655cc9d4b67d76dc0c147bcc8885b4281"} Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.293648 4661 scope.go:117] "RemoveContainer" containerID="4f5c399f1f43fa72aa1342d35d3156571d0a9548a99a1e3dd64db11795daec82" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.293953 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flvxz" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.297814 4661 generic.go:334] "Generic (PLEG): container finished" podID="c34658be-616a-469f-a560-61709f82cde6" containerID="4cd41c4745ea7ae489870a1a22a7169f3afbab9860a41d8ce1244f1853a7bd8d" exitCode=0 Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.297939 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7lfrt" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.298829 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lfrt" event={"ID":"c34658be-616a-469f-a560-61709f82cde6","Type":"ContainerDied","Data":"4cd41c4745ea7ae489870a1a22a7169f3afbab9860a41d8ce1244f1853a7bd8d"} Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.298869 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7lfrt" event={"ID":"c34658be-616a-469f-a560-61709f82cde6","Type":"ContainerDied","Data":"10f52907222a143512351165bdb5cb0d5a92bf6f9309ecde7fc3da63cfec33fc"} Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.306580 4661 generic.go:334] "Generic (PLEG): container finished" podID="40861cf6-5e11-46ad-be02-b415c4f06dee" containerID="20844769653e3bc8dd46c3d2e967c83d2cddc6e594ea5c2d198fb826c885b921" exitCode=0 Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.306655 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvkbf" event={"ID":"40861cf6-5e11-46ad-be02-b415c4f06dee","Type":"ContainerDied","Data":"20844769653e3bc8dd46c3d2e967c83d2cddc6e594ea5c2d198fb826c885b921"} Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.306713 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvkbf" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.306698 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvkbf" event={"ID":"40861cf6-5e11-46ad-be02-b415c4f06dee","Type":"ContainerDied","Data":"c5a0f84abd162ad6153c3a53b6a491e853d2d91e534c3544a4f1ae3cf85c58e0"} Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.309475 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" event={"ID":"663dd2a4-8e69-41d7-b561-4419dd0b4e90","Type":"ContainerStarted","Data":"cb605b31d6c721854f21dcad1525a33d22eb65304b62b37c6ae38eab108198ac"} Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.309502 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" event={"ID":"663dd2a4-8e69-41d7-b561-4419dd0b4e90","Type":"ContainerStarted","Data":"3a53f9ac4f0cd5537ad84f367063aee33b395ab93de22e9aa37bdc9c85b6c94f"} Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.310314 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.313087 4661 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p2zck container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" start-of-body= Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.313117 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" podUID="663dd2a4-8e69-41d7-b561-4419dd0b4e90" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.321987 4661 generic.go:334] "Generic (PLEG): container finished" podID="a087d508-430f-45ba-bff2-b58d06cebd51" containerID="46382e293347af2c3406396bb7946e118169b344875f2a17c16febcf9dd77459" exitCode=0 Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.322167 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr2m2" event={"ID":"a087d508-430f-45ba-bff2-b58d06cebd51","Type":"ContainerDied","Data":"46382e293347af2c3406396bb7946e118169b344875f2a17c16febcf9dd77459"} Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.322209 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr2m2" event={"ID":"a087d508-430f-45ba-bff2-b58d06cebd51","Type":"ContainerDied","Data":"fc7b89aecf983e94a9a987d79dc56f07000002ff3b29e19fd1f61da471c75e48"} Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.323413 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cr2m2" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.327504 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jf5n" event={"ID":"ac4a870b-4ca8-4046-b9b1-6001f8b13a51","Type":"ContainerDied","Data":"3393c739a03766e293b89fad2ab822403cc320eb3766252b4fd88c54c3cf37ca"} Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.327643 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5jf5n" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.366001 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" podStartSLOduration=1.365972829 podStartE2EDuration="1.365972829s" podCreationTimestamp="2026-01-20 18:11:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:11:35.339990181 +0000 UTC m=+351.670779883" watchObservedRunningTime="2026-01-20 18:11:35.365972829 +0000 UTC m=+351.696762511" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.384798 4661 scope.go:117] "RemoveContainer" containerID="4f5c399f1f43fa72aa1342d35d3156571d0a9548a99a1e3dd64db11795daec82" Jan 20 18:11:35 crc kubenswrapper[4661]: E0120 18:11:35.385143 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f5c399f1f43fa72aa1342d35d3156571d0a9548a99a1e3dd64db11795daec82\": container with ID starting with 4f5c399f1f43fa72aa1342d35d3156571d0a9548a99a1e3dd64db11795daec82 not found: ID does not exist" containerID="4f5c399f1f43fa72aa1342d35d3156571d0a9548a99a1e3dd64db11795daec82" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.385165 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f5c399f1f43fa72aa1342d35d3156571d0a9548a99a1e3dd64db11795daec82"} err="failed to get container status \"4f5c399f1f43fa72aa1342d35d3156571d0a9548a99a1e3dd64db11795daec82\": rpc error: code = NotFound desc = could not find container \"4f5c399f1f43fa72aa1342d35d3156571d0a9548a99a1e3dd64db11795daec82\": container with ID starting with 4f5c399f1f43fa72aa1342d35d3156571d0a9548a99a1e3dd64db11795daec82 not found: ID does not exist" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.385186 4661 scope.go:117] "RemoveContainer" containerID="4cd41c4745ea7ae489870a1a22a7169f3afbab9860a41d8ce1244f1853a7bd8d" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.400910 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvkbf"] Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.404366 4661 scope.go:117] "RemoveContainer" containerID="0b11b2cf1afdb0b38de19fd37e8bce3a11679416f3be4f41a7680340086385da" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.406152 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvkbf"] Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.418277 4661 scope.go:117] "RemoveContainer" containerID="b547dbca4059534c8408283f55e6dc6ff26c310b608dff887bc41d78c755fe63" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.421424 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5jf5n"] Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.427527 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5jf5n"] Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.433047 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7lfrt"] Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.435610 4661 scope.go:117] "RemoveContainer" containerID="4cd41c4745ea7ae489870a1a22a7169f3afbab9860a41d8ce1244f1853a7bd8d" Jan 20 18:11:35 crc kubenswrapper[4661]: E0120 18:11:35.435982 4661 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd41c4745ea7ae489870a1a22a7169f3afbab9860a41d8ce1244f1853a7bd8d\": container with ID starting with 4cd41c4745ea7ae489870a1a22a7169f3afbab9860a41d8ce1244f1853a7bd8d not found: ID does not exist" containerID="4cd41c4745ea7ae489870a1a22a7169f3afbab9860a41d8ce1244f1853a7bd8d" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.436015 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd41c4745ea7ae489870a1a22a7169f3afbab9860a41d8ce1244f1853a7bd8d"} err="failed to get container status \"4cd41c4745ea7ae489870a1a22a7169f3afbab9860a41d8ce1244f1853a7bd8d\": rpc error: code = NotFound desc = could not find container \"4cd41c4745ea7ae489870a1a22a7169f3afbab9860a41d8ce1244f1853a7bd8d\": container with ID starting with 4cd41c4745ea7ae489870a1a22a7169f3afbab9860a41d8ce1244f1853a7bd8d not found: ID does not exist" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.436041 4661 scope.go:117] "RemoveContainer" containerID="0b11b2cf1afdb0b38de19fd37e8bce3a11679416f3be4f41a7680340086385da" Jan 20 18:11:35 crc kubenswrapper[4661]: E0120 18:11:35.437008 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b11b2cf1afdb0b38de19fd37e8bce3a11679416f3be4f41a7680340086385da\": container with ID starting with 0b11b2cf1afdb0b38de19fd37e8bce3a11679416f3be4f41a7680340086385da not found: ID does not exist" containerID="0b11b2cf1afdb0b38de19fd37e8bce3a11679416f3be4f41a7680340086385da" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.437038 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b11b2cf1afdb0b38de19fd37e8bce3a11679416f3be4f41a7680340086385da"} err="failed to get container status \"0b11b2cf1afdb0b38de19fd37e8bce3a11679416f3be4f41a7680340086385da\": rpc error: code = NotFound desc = could not find container \"0b11b2cf1afdb0b38de19fd37e8bce3a11679416f3be4f41a7680340086385da\": container with ID starting with 0b11b2cf1afdb0b38de19fd37e8bce3a11679416f3be4f41a7680340086385da not found: ID does not exist" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.437060 4661 scope.go:117] "RemoveContainer" containerID="b547dbca4059534c8408283f55e6dc6ff26c310b608dff887bc41d78c755fe63" Jan 20 18:11:35 crc kubenswrapper[4661]: E0120 18:11:35.437324 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b547dbca4059534c8408283f55e6dc6ff26c310b608dff887bc41d78c755fe63\": container with ID starting with b547dbca4059534c8408283f55e6dc6ff26c310b608dff887bc41d78c755fe63 not found: ID does not exist" containerID="b547dbca4059534c8408283f55e6dc6ff26c310b608dff887bc41d78c755fe63" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.437362 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b547dbca4059534c8408283f55e6dc6ff26c310b608dff887bc41d78c755fe63"} err="failed to get container status \"b547dbca4059534c8408283f55e6dc6ff26c310b608dff887bc41d78c755fe63\": rpc error: code = NotFound desc = could not find container \"b547dbca4059534c8408283f55e6dc6ff26c310b608dff887bc41d78c755fe63\": container with ID starting with b547dbca4059534c8408283f55e6dc6ff26c310b608dff887bc41d78c755fe63 not found: ID does not exist" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.437388 4661 scope.go:117] 
"RemoveContainer" containerID="20844769653e3bc8dd46c3d2e967c83d2cddc6e594ea5c2d198fb826c885b921" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.440957 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7lfrt"] Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.447196 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cr2m2"] Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.450761 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cr2m2"] Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.453079 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flvxz"] Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.455723 4661 scope.go:117] "RemoveContainer" containerID="cd3e7e6ec69cbad2d4f4e9a3db89938b0258db3414dfaa789a8baf24e8f85f76" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.456461 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flvxz"] Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.467997 4661 scope.go:117] "RemoveContainer" containerID="bb4ecb237666be5a9f78edbc6f8062be1066f0db17312852a44589c207cf3970" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.483311 4661 scope.go:117] "RemoveContainer" containerID="20844769653e3bc8dd46c3d2e967c83d2cddc6e594ea5c2d198fb826c885b921" Jan 20 18:11:35 crc kubenswrapper[4661]: E0120 18:11:35.483845 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20844769653e3bc8dd46c3d2e967c83d2cddc6e594ea5c2d198fb826c885b921\": container with ID starting with 20844769653e3bc8dd46c3d2e967c83d2cddc6e594ea5c2d198fb826c885b921 not found: ID does not exist" containerID="20844769653e3bc8dd46c3d2e967c83d2cddc6e594ea5c2d198fb826c885b921" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.483888 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20844769653e3bc8dd46c3d2e967c83d2cddc6e594ea5c2d198fb826c885b921"} err="failed to get container status \"20844769653e3bc8dd46c3d2e967c83d2cddc6e594ea5c2d198fb826c885b921\": rpc error: code = NotFound desc = could not find container \"20844769653e3bc8dd46c3d2e967c83d2cddc6e594ea5c2d198fb826c885b921\": container with ID starting with 20844769653e3bc8dd46c3d2e967c83d2cddc6e594ea5c2d198fb826c885b921 not found: ID does not exist" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.483917 4661 scope.go:117] "RemoveContainer" containerID="cd3e7e6ec69cbad2d4f4e9a3db89938b0258db3414dfaa789a8baf24e8f85f76" Jan 20 18:11:35 crc kubenswrapper[4661]: E0120 18:11:35.484448 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd3e7e6ec69cbad2d4f4e9a3db89938b0258db3414dfaa789a8baf24e8f85f76\": container with ID starting with cd3e7e6ec69cbad2d4f4e9a3db89938b0258db3414dfaa789a8baf24e8f85f76 not found: ID does not exist" containerID="cd3e7e6ec69cbad2d4f4e9a3db89938b0258db3414dfaa789a8baf24e8f85f76" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.484485 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd3e7e6ec69cbad2d4f4e9a3db89938b0258db3414dfaa789a8baf24e8f85f76"} err="failed to get container status \"cd3e7e6ec69cbad2d4f4e9a3db89938b0258db3414dfaa789a8baf24e8f85f76\": 
rpc error: code = NotFound desc = could not find container \"cd3e7e6ec69cbad2d4f4e9a3db89938b0258db3414dfaa789a8baf24e8f85f76\": container with ID starting with cd3e7e6ec69cbad2d4f4e9a3db89938b0258db3414dfaa789a8baf24e8f85f76 not found: ID does not exist" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.484526 4661 scope.go:117] "RemoveContainer" containerID="bb4ecb237666be5a9f78edbc6f8062be1066f0db17312852a44589c207cf3970" Jan 20 18:11:35 crc kubenswrapper[4661]: E0120 18:11:35.484929 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb4ecb237666be5a9f78edbc6f8062be1066f0db17312852a44589c207cf3970\": container with ID starting with bb4ecb237666be5a9f78edbc6f8062be1066f0db17312852a44589c207cf3970 not found: ID does not exist" containerID="bb4ecb237666be5a9f78edbc6f8062be1066f0db17312852a44589c207cf3970" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.484964 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4ecb237666be5a9f78edbc6f8062be1066f0db17312852a44589c207cf3970"} err="failed to get container status \"bb4ecb237666be5a9f78edbc6f8062be1066f0db17312852a44589c207cf3970\": rpc error: code = NotFound desc = could not find container \"bb4ecb237666be5a9f78edbc6f8062be1066f0db17312852a44589c207cf3970\": container with ID starting with bb4ecb237666be5a9f78edbc6f8062be1066f0db17312852a44589c207cf3970 not found: ID does not exist" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.484990 4661 scope.go:117] "RemoveContainer" containerID="46382e293347af2c3406396bb7946e118169b344875f2a17c16febcf9dd77459" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.496536 4661 scope.go:117] "RemoveContainer" containerID="b6129bd7a5902f3f43d546782dd51acfd7c00abc59dfc57f56760f6f5c2a1663" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.512276 4661 scope.go:117] "RemoveContainer" containerID="947a5bd7969601103f94f4dde8dbee930156c260f00fc2e4df04935a6a402c6d" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.524283 4661 scope.go:117] "RemoveContainer" containerID="46382e293347af2c3406396bb7946e118169b344875f2a17c16febcf9dd77459" Jan 20 18:11:35 crc kubenswrapper[4661]: E0120 18:11:35.525056 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46382e293347af2c3406396bb7946e118169b344875f2a17c16febcf9dd77459\": container with ID starting with 46382e293347af2c3406396bb7946e118169b344875f2a17c16febcf9dd77459 not found: ID does not exist" containerID="46382e293347af2c3406396bb7946e118169b344875f2a17c16febcf9dd77459" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.525128 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46382e293347af2c3406396bb7946e118169b344875f2a17c16febcf9dd77459"} err="failed to get container status \"46382e293347af2c3406396bb7946e118169b344875f2a17c16febcf9dd77459\": rpc error: code = NotFound desc = could not find container \"46382e293347af2c3406396bb7946e118169b344875f2a17c16febcf9dd77459\": container with ID starting with 46382e293347af2c3406396bb7946e118169b344875f2a17c16febcf9dd77459 not found: ID does not exist" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.525192 4661 scope.go:117] "RemoveContainer" containerID="b6129bd7a5902f3f43d546782dd51acfd7c00abc59dfc57f56760f6f5c2a1663" Jan 20 18:11:35 crc kubenswrapper[4661]: E0120 18:11:35.525646 4661 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"b6129bd7a5902f3f43d546782dd51acfd7c00abc59dfc57f56760f6f5c2a1663\": container with ID starting with b6129bd7a5902f3f43d546782dd51acfd7c00abc59dfc57f56760f6f5c2a1663 not found: ID does not exist" containerID="b6129bd7a5902f3f43d546782dd51acfd7c00abc59dfc57f56760f6f5c2a1663" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.525883 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6129bd7a5902f3f43d546782dd51acfd7c00abc59dfc57f56760f6f5c2a1663"} err="failed to get container status \"b6129bd7a5902f3f43d546782dd51acfd7c00abc59dfc57f56760f6f5c2a1663\": rpc error: code = NotFound desc = could not find container \"b6129bd7a5902f3f43d546782dd51acfd7c00abc59dfc57f56760f6f5c2a1663\": container with ID starting with b6129bd7a5902f3f43d546782dd51acfd7c00abc59dfc57f56760f6f5c2a1663 not found: ID does not exist" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.525910 4661 scope.go:117] "RemoveContainer" containerID="947a5bd7969601103f94f4dde8dbee930156c260f00fc2e4df04935a6a402c6d" Jan 20 18:11:35 crc kubenswrapper[4661]: E0120 18:11:35.527294 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"947a5bd7969601103f94f4dde8dbee930156c260f00fc2e4df04935a6a402c6d\": container with ID starting with 947a5bd7969601103f94f4dde8dbee930156c260f00fc2e4df04935a6a402c6d not found: ID does not exist" containerID="947a5bd7969601103f94f4dde8dbee930156c260f00fc2e4df04935a6a402c6d" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.527333 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947a5bd7969601103f94f4dde8dbee930156c260f00fc2e4df04935a6a402c6d"} err="failed to get container status \"947a5bd7969601103f94f4dde8dbee930156c260f00fc2e4df04935a6a402c6d\": rpc error: code = NotFound desc = could not find container \"947a5bd7969601103f94f4dde8dbee930156c260f00fc2e4df04935a6a402c6d\": container with ID starting with 947a5bd7969601103f94f4dde8dbee930156c260f00fc2e4df04935a6a402c6d not found: ID does not exist" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.527349 4661 scope.go:117] "RemoveContainer" containerID="4ce98ee0b20cad31342ee8168f1f3238f10a2059a3f32502cebb6762ceffac4f" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.549623 4661 scope.go:117] "RemoveContainer" containerID="407855e057965dd9437ac8d98b2130c749094b681a7a4707f85912584fde6750" Jan 20 18:11:35 crc kubenswrapper[4661]: I0120 18:11:35.569079 4661 scope.go:117] "RemoveContainer" containerID="7a5c42800d36e5d6c21494b0c03b43ed58b44f390b48c5f0040c4ce81df19867" Jan 20 18:11:36 crc kubenswrapper[4661]: I0120 18:11:36.152136 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40861cf6-5e11-46ad-be02-b415c4f06dee" path="/var/lib/kubelet/pods/40861cf6-5e11-46ad-be02-b415c4f06dee/volumes" Jan 20 18:11:36 crc kubenswrapper[4661]: I0120 18:11:36.153785 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4" path="/var/lib/kubelet/pods/9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4/volumes" Jan 20 18:11:36 crc kubenswrapper[4661]: I0120 18:11:36.155128 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a087d508-430f-45ba-bff2-b58d06cebd51" path="/var/lib/kubelet/pods/a087d508-430f-45ba-bff2-b58d06cebd51/volumes" Jan 20 18:11:36 crc kubenswrapper[4661]: I0120 18:11:36.157585 4661 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" path="/var/lib/kubelet/pods/ac4a870b-4ca8-4046-b9b1-6001f8b13a51/volumes" Jan 20 18:11:36 crc kubenswrapper[4661]: I0120 18:11:36.159162 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c34658be-616a-469f-a560-61709f82cde6" path="/var/lib/kubelet/pods/c34658be-616a-469f-a560-61709f82cde6/volumes" Jan 20 18:11:36 crc kubenswrapper[4661]: I0120 18:11:36.359750 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-p2zck" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.257498 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4qf7z"] Jan 20 18:11:38 crc kubenswrapper[4661]: E0120 18:11:38.258116 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34658be-616a-469f-a560-61709f82cde6" containerName="extract-content" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258132 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34658be-616a-469f-a560-61709f82cde6" containerName="extract-content" Jan 20 18:11:38 crc kubenswrapper[4661]: E0120 18:11:38.258144 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34658be-616a-469f-a560-61709f82cde6" containerName="extract-utilities" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258152 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34658be-616a-469f-a560-61709f82cde6" containerName="extract-utilities" Jan 20 18:11:38 crc kubenswrapper[4661]: E0120 18:11:38.258162 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" containerName="extract-utilities" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258170 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" containerName="extract-utilities" Jan 20 18:11:38 crc kubenswrapper[4661]: E0120 18:11:38.258181 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34658be-616a-469f-a560-61709f82cde6" containerName="registry-server" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258189 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34658be-616a-469f-a560-61709f82cde6" containerName="registry-server" Jan 20 18:11:38 crc kubenswrapper[4661]: E0120 18:11:38.258198 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40861cf6-5e11-46ad-be02-b415c4f06dee" containerName="registry-server" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258205 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="40861cf6-5e11-46ad-be02-b415c4f06dee" containerName="registry-server" Jan 20 18:11:38 crc kubenswrapper[4661]: E0120 18:11:38.258227 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a087d508-430f-45ba-bff2-b58d06cebd51" containerName="extract-utilities" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258235 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="a087d508-430f-45ba-bff2-b58d06cebd51" containerName="extract-utilities" Jan 20 18:11:38 crc kubenswrapper[4661]: E0120 18:11:38.258246 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40861cf6-5e11-46ad-be02-b415c4f06dee" containerName="extract-utilities" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258253 4661 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="40861cf6-5e11-46ad-be02-b415c4f06dee" containerName="extract-utilities" Jan 20 18:11:38 crc kubenswrapper[4661]: E0120 18:11:38.258262 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4" containerName="marketplace-operator" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258268 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4" containerName="marketplace-operator" Jan 20 18:11:38 crc kubenswrapper[4661]: E0120 18:11:38.258279 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a087d508-430f-45ba-bff2-b58d06cebd51" containerName="extract-content" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258288 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="a087d508-430f-45ba-bff2-b58d06cebd51" containerName="extract-content" Jan 20 18:11:38 crc kubenswrapper[4661]: E0120 18:11:38.258300 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40861cf6-5e11-46ad-be02-b415c4f06dee" containerName="extract-content" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258307 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="40861cf6-5e11-46ad-be02-b415c4f06dee" containerName="extract-content" Jan 20 18:11:38 crc kubenswrapper[4661]: E0120 18:11:38.258318 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" containerName="extract-content" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258325 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" containerName="extract-content" Jan 20 18:11:38 crc kubenswrapper[4661]: E0120 18:11:38.258335 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a087d508-430f-45ba-bff2-b58d06cebd51" containerName="registry-server" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258342 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="a087d508-430f-45ba-bff2-b58d06cebd51" containerName="registry-server" Jan 20 18:11:38 crc kubenswrapper[4661]: E0120 18:11:38.258359 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" containerName="registry-server" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258368 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" containerName="registry-server" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258468 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="40861cf6-5e11-46ad-be02-b415c4f06dee" containerName="registry-server" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258483 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac4a870b-4ca8-4046-b9b1-6001f8b13a51" containerName="registry-server" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258495 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a7d9c3e-88e9-44b2-98bc-6aab91fbf9b4" containerName="marketplace-operator" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258504 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="c34658be-616a-469f-a560-61709f82cde6" containerName="registry-server" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.258512 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="a087d508-430f-45ba-bff2-b58d06cebd51" containerName="registry-server" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.261774 
4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.267127 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.278136 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4qf7z"] Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.322007 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a07359-5af4-415a-af87-0b579fb7d0dc-utilities\") pod \"certified-operators-4qf7z\" (UID: \"c4a07359-5af4-415a-af87-0b579fb7d0dc\") " pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.322087 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a07359-5af4-415a-af87-0b579fb7d0dc-catalog-content\") pod \"certified-operators-4qf7z\" (UID: \"c4a07359-5af4-415a-af87-0b579fb7d0dc\") " pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.322144 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4bkg\" (UniqueName: \"kubernetes.io/projected/c4a07359-5af4-415a-af87-0b579fb7d0dc-kube-api-access-b4bkg\") pod \"certified-operators-4qf7z\" (UID: \"c4a07359-5af4-415a-af87-0b579fb7d0dc\") " pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.422514 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a07359-5af4-415a-af87-0b579fb7d0dc-utilities\") pod \"certified-operators-4qf7z\" (UID: \"c4a07359-5af4-415a-af87-0b579fb7d0dc\") " pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.422556 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a07359-5af4-415a-af87-0b579fb7d0dc-catalog-content\") pod \"certified-operators-4qf7z\" (UID: \"c4a07359-5af4-415a-af87-0b579fb7d0dc\") " pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.422598 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4bkg\" (UniqueName: \"kubernetes.io/projected/c4a07359-5af4-415a-af87-0b579fb7d0dc-kube-api-access-b4bkg\") pod \"certified-operators-4qf7z\" (UID: \"c4a07359-5af4-415a-af87-0b579fb7d0dc\") " pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.423474 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a07359-5af4-415a-af87-0b579fb7d0dc-catalog-content\") pod \"certified-operators-4qf7z\" (UID: \"c4a07359-5af4-415a-af87-0b579fb7d0dc\") " pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.423812 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c4a07359-5af4-415a-af87-0b579fb7d0dc-utilities\") pod \"certified-operators-4qf7z\" (UID: \"c4a07359-5af4-415a-af87-0b579fb7d0dc\") " pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.453421 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4bkg\" (UniqueName: \"kubernetes.io/projected/c4a07359-5af4-415a-af87-0b579fb7d0dc-kube-api-access-b4bkg\") pod \"certified-operators-4qf7z\" (UID: \"c4a07359-5af4-415a-af87-0b579fb7d0dc\") " pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.455836 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-snrjs"] Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.461730 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.465150 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.469438 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snrjs"] Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.524080 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zqg6\" (UniqueName: \"kubernetes.io/projected/6c7424e7-2b2f-4f1b-8970-9061b4f651ff-kube-api-access-2zqg6\") pod \"community-operators-snrjs\" (UID: \"6c7424e7-2b2f-4f1b-8970-9061b4f651ff\") " pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.524142 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7424e7-2b2f-4f1b-8970-9061b4f651ff-catalog-content\") pod \"community-operators-snrjs\" (UID: \"6c7424e7-2b2f-4f1b-8970-9061b4f651ff\") " pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.524314 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7424e7-2b2f-4f1b-8970-9061b4f651ff-utilities\") pod \"community-operators-snrjs\" (UID: \"6c7424e7-2b2f-4f1b-8970-9061b4f651ff\") " pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.625536 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zqg6\" (UniqueName: \"kubernetes.io/projected/6c7424e7-2b2f-4f1b-8970-9061b4f651ff-kube-api-access-2zqg6\") pod \"community-operators-snrjs\" (UID: \"6c7424e7-2b2f-4f1b-8970-9061b4f651ff\") " pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.625596 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7424e7-2b2f-4f1b-8970-9061b4f651ff-catalog-content\") pod \"community-operators-snrjs\" (UID: \"6c7424e7-2b2f-4f1b-8970-9061b4f651ff\") " pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.625621 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/6c7424e7-2b2f-4f1b-8970-9061b4f651ff-utilities\") pod \"community-operators-snrjs\" (UID: \"6c7424e7-2b2f-4f1b-8970-9061b4f651ff\") " pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.626117 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c7424e7-2b2f-4f1b-8970-9061b4f651ff-utilities\") pod \"community-operators-snrjs\" (UID: \"6c7424e7-2b2f-4f1b-8970-9061b4f651ff\") " pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.626410 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c7424e7-2b2f-4f1b-8970-9061b4f651ff-catalog-content\") pod \"community-operators-snrjs\" (UID: \"6c7424e7-2b2f-4f1b-8970-9061b4f651ff\") " pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.651794 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zqg6\" (UniqueName: \"kubernetes.io/projected/6c7424e7-2b2f-4f1b-8970-9061b4f651ff-kube-api-access-2zqg6\") pod \"community-operators-snrjs\" (UID: \"6c7424e7-2b2f-4f1b-8970-9061b4f651ff\") " pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.652817 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:38 crc kubenswrapper[4661]: I0120 18:11:38.791193 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:39 crc kubenswrapper[4661]: I0120 18:11:39.104281 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4qf7z"] Jan 20 18:11:39 crc kubenswrapper[4661]: W0120 18:11:39.115799 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4a07359_5af4_415a_af87_0b579fb7d0dc.slice/crio-613f4fa21a3f3c2c300cf8e4e5b2e14a21d228f3295c2a0967b1eeddad06936c WatchSource:0}: Error finding container 613f4fa21a3f3c2c300cf8e4e5b2e14a21d228f3295c2a0967b1eeddad06936c: Status 404 returned error can't find the container with id 613f4fa21a3f3c2c300cf8e4e5b2e14a21d228f3295c2a0967b1eeddad06936c Jan 20 18:11:39 crc kubenswrapper[4661]: I0120 18:11:39.235843 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snrjs"] Jan 20 18:11:39 crc kubenswrapper[4661]: W0120 18:11:39.247938 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7424e7_2b2f_4f1b_8970_9061b4f651ff.slice/crio-2f2dbe0543ca7ef4d75165865c31d041484bcf0dda6498c8d8c1477861104a23 WatchSource:0}: Error finding container 2f2dbe0543ca7ef4d75165865c31d041484bcf0dda6498c8d8c1477861104a23: Status 404 returned error can't find the container with id 2f2dbe0543ca7ef4d75165865c31d041484bcf0dda6498c8d8c1477861104a23 Jan 20 18:11:39 crc kubenswrapper[4661]: I0120 18:11:39.372530 4661 generic.go:334] "Generic (PLEG): container finished" podID="c4a07359-5af4-415a-af87-0b579fb7d0dc" containerID="bc784cf3d6dd02d6fd9fb149af57096aa251452ab14bb6898abe179b2bb770bd" exitCode=0 Jan 20 18:11:39 crc kubenswrapper[4661]: I0120 18:11:39.373482 4661 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qf7z" event={"ID":"c4a07359-5af4-415a-af87-0b579fb7d0dc","Type":"ContainerDied","Data":"bc784cf3d6dd02d6fd9fb149af57096aa251452ab14bb6898abe179b2bb770bd"} Jan 20 18:11:39 crc kubenswrapper[4661]: I0120 18:11:39.373585 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qf7z" event={"ID":"c4a07359-5af4-415a-af87-0b579fb7d0dc","Type":"ContainerStarted","Data":"613f4fa21a3f3c2c300cf8e4e5b2e14a21d228f3295c2a0967b1eeddad06936c"} Jan 20 18:11:39 crc kubenswrapper[4661]: I0120 18:11:39.375740 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snrjs" event={"ID":"6c7424e7-2b2f-4f1b-8970-9061b4f651ff","Type":"ContainerStarted","Data":"2f2dbe0543ca7ef4d75165865c31d041484bcf0dda6498c8d8c1477861104a23"} Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.384275 4661 generic.go:334] "Generic (PLEG): container finished" podID="6c7424e7-2b2f-4f1b-8970-9061b4f651ff" containerID="709af231f8dbae5139158fffb2d97c503b16dcf82201fcf20a24cf82174f6d28" exitCode=0 Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.384399 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snrjs" event={"ID":"6c7424e7-2b2f-4f1b-8970-9061b4f651ff","Type":"ContainerDied","Data":"709af231f8dbae5139158fffb2d97c503b16dcf82201fcf20a24cf82174f6d28"} Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.389368 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qf7z" event={"ID":"c4a07359-5af4-415a-af87-0b579fb7d0dc","Type":"ContainerStarted","Data":"550931af398618446eedcc45b75ce624dabfe7d992df10fdbc1e4ba69e9a175c"} Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.654028 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cm29g"] Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.654971 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.658114 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.667224 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cm29g"] Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.753388 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k244j\" (UniqueName: \"kubernetes.io/projected/a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef-kube-api-access-k244j\") pod \"redhat-operators-cm29g\" (UID: \"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef\") " pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.753483 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef-catalog-content\") pod \"redhat-operators-cm29g\" (UID: \"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef\") " pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.753514 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef-utilities\") pod \"redhat-operators-cm29g\" (UID: \"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef\") " pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.850307 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-chn68"] Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.851333 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.854431 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k244j\" (UniqueName: \"kubernetes.io/projected/a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef-kube-api-access-k244j\") pod \"redhat-operators-cm29g\" (UID: \"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef\") " pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.854497 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef-catalog-content\") pod \"redhat-operators-cm29g\" (UID: \"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef\") " pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.854532 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef-utilities\") pod \"redhat-operators-cm29g\" (UID: \"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef\") " pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.855311 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef-catalog-content\") pod \"redhat-operators-cm29g\" (UID: \"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef\") " pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.855509 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef-utilities\") pod \"redhat-operators-cm29g\" (UID: \"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef\") " pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.855699 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.862689 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-chn68"] Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.878232 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k244j\" (UniqueName: \"kubernetes.io/projected/a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef-kube-api-access-k244j\") pod \"redhat-operators-cm29g\" (UID: \"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef\") " pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.956316 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631fc07f-b0f0-4f54-881f-bc76a8ec7b34-utilities\") pod \"redhat-marketplace-chn68\" (UID: \"631fc07f-b0f0-4f54-881f-bc76a8ec7b34\") " pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.956390 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq4v9\" (UniqueName: \"kubernetes.io/projected/631fc07f-b0f0-4f54-881f-bc76a8ec7b34-kube-api-access-lq4v9\") pod \"redhat-marketplace-chn68\" (UID: \"631fc07f-b0f0-4f54-881f-bc76a8ec7b34\") " 
pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.956445 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631fc07f-b0f0-4f54-881f-bc76a8ec7b34-catalog-content\") pod \"redhat-marketplace-chn68\" (UID: \"631fc07f-b0f0-4f54-881f-bc76a8ec7b34\") " pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:40 crc kubenswrapper[4661]: I0120 18:11:40.974501 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:11:41 crc kubenswrapper[4661]: I0120 18:11:41.058554 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631fc07f-b0f0-4f54-881f-bc76a8ec7b34-utilities\") pod \"redhat-marketplace-chn68\" (UID: \"631fc07f-b0f0-4f54-881f-bc76a8ec7b34\") " pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:41 crc kubenswrapper[4661]: I0120 18:11:41.058628 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4v9\" (UniqueName: \"kubernetes.io/projected/631fc07f-b0f0-4f54-881f-bc76a8ec7b34-kube-api-access-lq4v9\") pod \"redhat-marketplace-chn68\" (UID: \"631fc07f-b0f0-4f54-881f-bc76a8ec7b34\") " pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:41 crc kubenswrapper[4661]: I0120 18:11:41.058739 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631fc07f-b0f0-4f54-881f-bc76a8ec7b34-catalog-content\") pod \"redhat-marketplace-chn68\" (UID: \"631fc07f-b0f0-4f54-881f-bc76a8ec7b34\") " pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:41 crc kubenswrapper[4661]: I0120 18:11:41.059567 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631fc07f-b0f0-4f54-881f-bc76a8ec7b34-catalog-content\") pod \"redhat-marketplace-chn68\" (UID: \"631fc07f-b0f0-4f54-881f-bc76a8ec7b34\") " pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:41 crc kubenswrapper[4661]: I0120 18:11:41.060020 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631fc07f-b0f0-4f54-881f-bc76a8ec7b34-utilities\") pod \"redhat-marketplace-chn68\" (UID: \"631fc07f-b0f0-4f54-881f-bc76a8ec7b34\") " pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:41 crc kubenswrapper[4661]: I0120 18:11:41.082360 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq4v9\" (UniqueName: \"kubernetes.io/projected/631fc07f-b0f0-4f54-881f-bc76a8ec7b34-kube-api-access-lq4v9\") pod \"redhat-marketplace-chn68\" (UID: \"631fc07f-b0f0-4f54-881f-bc76a8ec7b34\") " pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:41 crc kubenswrapper[4661]: I0120 18:11:41.168240 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:41 crc kubenswrapper[4661]: I0120 18:11:41.414614 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snrjs" event={"ID":"6c7424e7-2b2f-4f1b-8970-9061b4f651ff","Type":"ContainerStarted","Data":"3342d76d7e4bddb169ffd11a8739e02f4f5ab69224f0a4dc218e19491af0e64a"} Jan 20 18:11:41 crc kubenswrapper[4661]: I0120 18:11:41.420553 4661 generic.go:334] "Generic (PLEG): container finished" podID="c4a07359-5af4-415a-af87-0b579fb7d0dc" containerID="550931af398618446eedcc45b75ce624dabfe7d992df10fdbc1e4ba69e9a175c" exitCode=0 Jan 20 18:11:41 crc kubenswrapper[4661]: I0120 18:11:41.420604 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qf7z" event={"ID":"c4a07359-5af4-415a-af87-0b579fb7d0dc","Type":"ContainerDied","Data":"550931af398618446eedcc45b75ce624dabfe7d992df10fdbc1e4ba69e9a175c"} Jan 20 18:11:41 crc kubenswrapper[4661]: I0120 18:11:41.495406 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cm29g"] Jan 20 18:11:41 crc kubenswrapper[4661]: I0120 18:11:41.618328 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-chn68"] Jan 20 18:11:41 crc kubenswrapper[4661]: W0120 18:11:41.625162 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631fc07f_b0f0_4f54_881f_bc76a8ec7b34.slice/crio-abec5c3d98cb7250f46c09a424bf0bfca0596a0c6a3c798365de92a1cf26066e WatchSource:0}: Error finding container abec5c3d98cb7250f46c09a424bf0bfca0596a0c6a3c798365de92a1cf26066e: Status 404 returned error can't find the container with id abec5c3d98cb7250f46c09a424bf0bfca0596a0c6a3c798365de92a1cf26066e Jan 20 18:11:42 crc kubenswrapper[4661]: I0120 18:11:42.427084 4661 generic.go:334] "Generic (PLEG): container finished" podID="6c7424e7-2b2f-4f1b-8970-9061b4f651ff" containerID="3342d76d7e4bddb169ffd11a8739e02f4f5ab69224f0a4dc218e19491af0e64a" exitCode=0 Jan 20 18:11:42 crc kubenswrapper[4661]: I0120 18:11:42.427152 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snrjs" event={"ID":"6c7424e7-2b2f-4f1b-8970-9061b4f651ff","Type":"ContainerDied","Data":"3342d76d7e4bddb169ffd11a8739e02f4f5ab69224f0a4dc218e19491af0e64a"} Jan 20 18:11:42 crc kubenswrapper[4661]: I0120 18:11:42.429373 4661 generic.go:334] "Generic (PLEG): container finished" podID="631fc07f-b0f0-4f54-881f-bc76a8ec7b34" containerID="6f220a19a357871701c96f2424d6d099451b47024f2dc5874c7e35159043a94e" exitCode=0 Jan 20 18:11:42 crc kubenswrapper[4661]: I0120 18:11:42.429423 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chn68" event={"ID":"631fc07f-b0f0-4f54-881f-bc76a8ec7b34","Type":"ContainerDied","Data":"6f220a19a357871701c96f2424d6d099451b47024f2dc5874c7e35159043a94e"} Jan 20 18:11:42 crc kubenswrapper[4661]: I0120 18:11:42.429438 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chn68" event={"ID":"631fc07f-b0f0-4f54-881f-bc76a8ec7b34","Type":"ContainerStarted","Data":"abec5c3d98cb7250f46c09a424bf0bfca0596a0c6a3c798365de92a1cf26066e"} Jan 20 18:11:42 crc kubenswrapper[4661]: I0120 18:11:42.433121 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qf7z" 
event={"ID":"c4a07359-5af4-415a-af87-0b579fb7d0dc","Type":"ContainerStarted","Data":"53af4bcb38c48d6c833d5c39cd583b03e5d3bebea804c69ffd5797907b48571a"} Jan 20 18:11:42 crc kubenswrapper[4661]: I0120 18:11:42.434609 4661 generic.go:334] "Generic (PLEG): container finished" podID="a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef" containerID="8d8485ba745bd3db45c18074f0de9426343b218d7901623e3a6220437154a51d" exitCode=0 Jan 20 18:11:42 crc kubenswrapper[4661]: I0120 18:11:42.434646 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm29g" event={"ID":"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef","Type":"ContainerDied","Data":"8d8485ba745bd3db45c18074f0de9426343b218d7901623e3a6220437154a51d"} Jan 20 18:11:42 crc kubenswrapper[4661]: I0120 18:11:42.434685 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm29g" event={"ID":"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef","Type":"ContainerStarted","Data":"d5ad744e64eb7d4889a6fd38f9348887df348f267846212d112242f6e89c5079"} Jan 20 18:11:42 crc kubenswrapper[4661]: I0120 18:11:42.468603 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4qf7z" podStartSLOduration=1.6994164980000002 podStartE2EDuration="4.468585813s" podCreationTimestamp="2026-01-20 18:11:38 +0000 UTC" firstStartedPulling="2026-01-20 18:11:39.375597785 +0000 UTC m=+355.706387447" lastFinishedPulling="2026-01-20 18:11:42.1447671 +0000 UTC m=+358.475556762" observedRunningTime="2026-01-20 18:11:42.467482194 +0000 UTC m=+358.798271866" watchObservedRunningTime="2026-01-20 18:11:42.468585813 +0000 UTC m=+358.799375485" Jan 20 18:11:43 crc kubenswrapper[4661]: I0120 18:11:43.442402 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snrjs" event={"ID":"6c7424e7-2b2f-4f1b-8970-9061b4f651ff","Type":"ContainerStarted","Data":"ef7ef877a16775c117053f1241dea97ae685de43d80a0b99aebde737f6285ace"} Jan 20 18:11:43 crc kubenswrapper[4661]: I0120 18:11:43.446132 4661 generic.go:334] "Generic (PLEG): container finished" podID="631fc07f-b0f0-4f54-881f-bc76a8ec7b34" containerID="36a1ca41c3292e7c3d93657628dc2ed3c9d16852aa6608c9a52b40fc9fe20a3d" exitCode=0 Jan 20 18:11:43 crc kubenswrapper[4661]: I0120 18:11:43.446330 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chn68" event={"ID":"631fc07f-b0f0-4f54-881f-bc76a8ec7b34","Type":"ContainerDied","Data":"36a1ca41c3292e7c3d93657628dc2ed3c9d16852aa6608c9a52b40fc9fe20a3d"} Jan 20 18:11:43 crc kubenswrapper[4661]: I0120 18:11:43.468524 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-snrjs" podStartSLOduration=3.017801184 podStartE2EDuration="5.468504158s" podCreationTimestamp="2026-01-20 18:11:38 +0000 UTC" firstStartedPulling="2026-01-20 18:11:40.386916026 +0000 UTC m=+356.717705728" lastFinishedPulling="2026-01-20 18:11:42.83761904 +0000 UTC m=+359.168408702" observedRunningTime="2026-01-20 18:11:43.465037155 +0000 UTC m=+359.795826817" watchObservedRunningTime="2026-01-20 18:11:43.468504158 +0000 UTC m=+359.799293820" Jan 20 18:11:44 crc kubenswrapper[4661]: I0120 18:11:44.456036 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm29g" event={"ID":"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef","Type":"ContainerStarted","Data":"11edd39a28ceaceff85f062f54e1041efb629606bb981486e40711d62401b281"} Jan 20 18:11:45 crc 
kubenswrapper[4661]: I0120 18:11:45.463982 4661 generic.go:334] "Generic (PLEG): container finished" podID="a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef" containerID="11edd39a28ceaceff85f062f54e1041efb629606bb981486e40711d62401b281" exitCode=0 Jan 20 18:11:45 crc kubenswrapper[4661]: I0120 18:11:45.464984 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm29g" event={"ID":"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef","Type":"ContainerDied","Data":"11edd39a28ceaceff85f062f54e1041efb629606bb981486e40711d62401b281"} Jan 20 18:11:45 crc kubenswrapper[4661]: I0120 18:11:45.476800 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chn68" event={"ID":"631fc07f-b0f0-4f54-881f-bc76a8ec7b34","Type":"ContainerStarted","Data":"2ad60d96bdce8c18bf24bb0852c8bb9fc22d20e55f4818de23b0d537e38e5c73"} Jan 20 18:11:45 crc kubenswrapper[4661]: I0120 18:11:45.501610 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-chn68" podStartSLOduration=2.991996596 podStartE2EDuration="5.501591651s" podCreationTimestamp="2026-01-20 18:11:40 +0000 UTC" firstStartedPulling="2026-01-20 18:11:42.431418606 +0000 UTC m=+358.762208268" lastFinishedPulling="2026-01-20 18:11:44.941013671 +0000 UTC m=+361.271803323" observedRunningTime="2026-01-20 18:11:45.499034852 +0000 UTC m=+361.829824514" watchObservedRunningTime="2026-01-20 18:11:45.501591651 +0000 UTC m=+361.832381313" Jan 20 18:11:47 crc kubenswrapper[4661]: I0120 18:11:47.492565 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm29g" event={"ID":"a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef","Type":"ContainerStarted","Data":"97e2844f4af667a63f23d92a8e27211a1af117f09ac342a0ce741e2fea29f3e9"} Jan 20 18:11:47 crc kubenswrapper[4661]: I0120 18:11:47.518324 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cm29g" podStartSLOduration=3.642416608 podStartE2EDuration="7.518301153s" podCreationTimestamp="2026-01-20 18:11:40 +0000 UTC" firstStartedPulling="2026-01-20 18:11:42.435566117 +0000 UTC m=+358.766355779" lastFinishedPulling="2026-01-20 18:11:46.311450662 +0000 UTC m=+362.642240324" observedRunningTime="2026-01-20 18:11:47.518203251 +0000 UTC m=+363.848992913" watchObservedRunningTime="2026-01-20 18:11:47.518301153 +0000 UTC m=+363.849090845" Jan 20 18:11:48 crc kubenswrapper[4661]: I0120 18:11:48.654088 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:48 crc kubenswrapper[4661]: I0120 18:11:48.654136 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:48 crc kubenswrapper[4661]: I0120 18:11:48.714823 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:48 crc kubenswrapper[4661]: I0120 18:11:48.793152 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:48 crc kubenswrapper[4661]: I0120 18:11:48.793203 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:48 crc kubenswrapper[4661]: I0120 18:11:48.828734 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:49 crc kubenswrapper[4661]: I0120 18:11:49.538947 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-snrjs" Jan 20 18:11:49 crc kubenswrapper[4661]: I0120 18:11:49.539104 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4qf7z" Jan 20 18:11:50 crc kubenswrapper[4661]: I0120 18:11:50.975094 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:11:50 crc kubenswrapper[4661]: I0120 18:11:50.975398 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:11:51 crc kubenswrapper[4661]: I0120 18:11:51.168874 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:51 crc kubenswrapper[4661]: I0120 18:11:51.168924 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:51 crc kubenswrapper[4661]: I0120 18:11:51.275369 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:51 crc kubenswrapper[4661]: I0120 18:11:51.553452 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-chn68" Jan 20 18:11:52 crc kubenswrapper[4661]: I0120 18:11:52.026348 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cm29g" podUID="a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef" containerName="registry-server" probeResult="failure" output=< Jan 20 18:11:52 crc kubenswrapper[4661]: timeout: failed to connect service ":50051" within 1s Jan 20 18:11:52 crc kubenswrapper[4661]: > Jan 20 18:11:59 crc kubenswrapper[4661]: I0120 18:11:59.324023 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:11:59 crc kubenswrapper[4661]: I0120 18:11:59.324328 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:12:00 crc kubenswrapper[4661]: I0120 18:12:00.000800 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5494bbdbdf-l8f54"] Jan 20 18:12:00 crc kubenswrapper[4661]: I0120 18:12:00.001247 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" podUID="b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec" containerName="controller-manager" containerID="cri-o://03b211d9f0d823bbb48a0a17a7ad883c10620323e37466bf1b1453c96e930212" gracePeriod=30 Jan 20 18:12:00 crc kubenswrapper[4661]: I0120 18:12:00.075089 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx"] Jan 20 18:12:00 crc kubenswrapper[4661]: 
I0120 18:12:00.075560 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" podUID="eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6" containerName="route-controller-manager" containerID="cri-o://dc3fc549f55c23f7aece1dd8b6baa4a74c23c1cfd7e34f71b3b74c9f4f437d42" gracePeriod=30 Jan 20 18:12:01 crc kubenswrapper[4661]: I0120 18:12:01.034223 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:12:01 crc kubenswrapper[4661]: I0120 18:12:01.083059 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cm29g" Jan 20 18:12:03 crc kubenswrapper[4661]: I0120 18:12:03.577006 4661 generic.go:334] "Generic (PLEG): container finished" podID="b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec" containerID="03b211d9f0d823bbb48a0a17a7ad883c10620323e37466bf1b1453c96e930212" exitCode=0 Jan 20 18:12:03 crc kubenswrapper[4661]: I0120 18:12:03.577080 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" event={"ID":"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec","Type":"ContainerDied","Data":"03b211d9f0d823bbb48a0a17a7ad883c10620323e37466bf1b1453c96e930212"} Jan 20 18:12:03 crc kubenswrapper[4661]: I0120 18:12:03.580518 4661 generic.go:334] "Generic (PLEG): container finished" podID="eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6" containerID="dc3fc549f55c23f7aece1dd8b6baa4a74c23c1cfd7e34f71b3b74c9f4f437d42" exitCode=0 Jan 20 18:12:03 crc kubenswrapper[4661]: I0120 18:12:03.580717 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" event={"ID":"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6","Type":"ContainerDied","Data":"dc3fc549f55c23f7aece1dd8b6baa4a74c23c1cfd7e34f71b3b74c9f4f437d42"} Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.210237 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.216810 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.234933 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5897c66b94-jqn9j"] Jan 20 18:12:04 crc kubenswrapper[4661]: E0120 18:12:04.235230 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6" containerName="route-controller-manager" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.235254 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6" containerName="route-controller-manager" Jan 20 18:12:04 crc kubenswrapper[4661]: E0120 18:12:04.235262 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec" containerName="controller-manager" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.235270 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec" containerName="controller-manager" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.235373 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6" containerName="route-controller-manager" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.235387 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec" containerName="controller-manager" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.235881 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.271556 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5897c66b94-jqn9j"] Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.314071 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pqgt\" (UniqueName: \"kubernetes.io/projected/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-kube-api-access-4pqgt\") pod \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.315357 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-client-ca\") pod \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.315491 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-config\") pod \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.315597 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-client-ca\") pod \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.315760 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-proxy-ca-bundles\") pod \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.315867 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-serving-cert\") pod \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.315958 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-serving-cert\") pod \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.316057 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-config\") pod \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\" (UID: \"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec\") " Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.316145 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb6sq\" (UniqueName: \"kubernetes.io/projected/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-kube-api-access-zb6sq\") pod \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\" (UID: \"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6\") " Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.316202 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec" (UID: "b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.316257 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-client-ca" (OuterVolumeSpecName: "client-ca") pod "b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec" (UID: "b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.316293 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-client-ca" (OuterVolumeSpecName: "client-ca") pod "eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6" (UID: "eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.316488 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9525fe4d-1051-4fc7-81db-e7059d70737f-client-ca\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.316577 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9525fe4d-1051-4fc7-81db-e7059d70737f-serving-cert\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.316778 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9525fe4d-1051-4fc7-81db-e7059d70737f-config\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.316629 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-config" (OuterVolumeSpecName: "config") pod "eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6" (UID: "eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.316860 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-config" (OuterVolumeSpecName: "config") pod "b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec" (UID: "b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.317154 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6gs\" (UniqueName: \"kubernetes.io/projected/9525fe4d-1051-4fc7-81db-e7059d70737f-kube-api-access-9d6gs\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.317354 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9525fe4d-1051-4fc7-81db-e7059d70737f-proxy-ca-bundles\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.317532 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.317549 4661 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.317560 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.317570 4661 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.317578 4661 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.322277 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6" (UID: "eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.322364 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-kube-api-access-zb6sq" (OuterVolumeSpecName: "kube-api-access-zb6sq") pod "eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6" (UID: "eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6"). InnerVolumeSpecName "kube-api-access-zb6sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.322593 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec" (UID: "b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.333164 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-kube-api-access-4pqgt" (OuterVolumeSpecName: "kube-api-access-4pqgt") pod "b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec" (UID: "b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec"). InnerVolumeSpecName "kube-api-access-4pqgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.418129 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9525fe4d-1051-4fc7-81db-e7059d70737f-client-ca\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.418183 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9525fe4d-1051-4fc7-81db-e7059d70737f-serving-cert\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.418204 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9525fe4d-1051-4fc7-81db-e7059d70737f-config\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.418234 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6gs\" (UniqueName: \"kubernetes.io/projected/9525fe4d-1051-4fc7-81db-e7059d70737f-kube-api-access-9d6gs\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.418278 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9525fe4d-1051-4fc7-81db-e7059d70737f-proxy-ca-bundles\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.418333 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.418345 4661 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.418355 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb6sq\" (UniqueName: \"kubernetes.io/projected/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6-kube-api-access-zb6sq\") on node \"crc\" DevicePath \"\"" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.418365 4661 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-4pqgt\" (UniqueName: \"kubernetes.io/projected/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec-kube-api-access-4pqgt\") on node \"crc\" DevicePath \"\"" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.419418 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9525fe4d-1051-4fc7-81db-e7059d70737f-proxy-ca-bundles\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.420606 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9525fe4d-1051-4fc7-81db-e7059d70737f-config\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.420716 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9525fe4d-1051-4fc7-81db-e7059d70737f-client-ca\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.426467 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9525fe4d-1051-4fc7-81db-e7059d70737f-serving-cert\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.440955 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6gs\" (UniqueName: \"kubernetes.io/projected/9525fe4d-1051-4fc7-81db-e7059d70737f-kube-api-access-9d6gs\") pod \"controller-manager-5897c66b94-jqn9j\" (UID: \"9525fe4d-1051-4fc7-81db-e7059d70737f\") " pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.558022 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.618750 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" event={"ID":"b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec","Type":"ContainerDied","Data":"01e326edafa225a6e654546d622c33038092b5787ac98197fc84ad10858bf8ab"} Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.618814 4661 scope.go:117] "RemoveContainer" containerID="03b211d9f0d823bbb48a0a17a7ad883c10620323e37466bf1b1453c96e930212" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.619654 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5494bbdbdf-l8f54" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.625015 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" event={"ID":"eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6","Type":"ContainerDied","Data":"05a0adbc9a277ab333a2d765625743b08ee573035eec54baaa0d70b185b665d8"} Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.625058 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.674714 4661 scope.go:117] "RemoveContainer" containerID="dc3fc549f55c23f7aece1dd8b6baa4a74c23c1cfd7e34f71b3b74c9f4f437d42" Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.709925 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx"] Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.711378 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795b8d5757-4qlvx"] Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.724639 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5494bbdbdf-l8f54"] Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.732056 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5494bbdbdf-l8f54"] Jan 20 18:12:04 crc kubenswrapper[4661]: I0120 18:12:04.834754 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5897c66b94-jqn9j"] Jan 20 18:12:05 crc kubenswrapper[4661]: I0120 18:12:05.633011 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" event={"ID":"9525fe4d-1051-4fc7-81db-e7059d70737f","Type":"ContainerStarted","Data":"049ebafe97fdc4c97d4e0f5904c63939a0daf98f644c24718a6a750d1d6ab15f"} Jan 20 18:12:05 crc kubenswrapper[4661]: I0120 18:12:05.633062 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" event={"ID":"9525fe4d-1051-4fc7-81db-e7059d70737f","Type":"ContainerStarted","Data":"d19925adf552d5c13010aa0b2f7f0dbf0bdb4411a444debfd74ca715c799505e"} Jan 20 18:12:05 crc kubenswrapper[4661]: I0120 18:12:05.634481 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:05 crc kubenswrapper[4661]: I0120 18:12:05.639609 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" Jan 20 18:12:05 crc kubenswrapper[4661]: I0120 18:12:05.659850 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5897c66b94-jqn9j" podStartSLOduration=5.659829701 podStartE2EDuration="5.659829701s" podCreationTimestamp="2026-01-20 18:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:12:05.655485744 +0000 UTC m=+381.986275416" watchObservedRunningTime="2026-01-20 18:12:05.659829701 +0000 UTC m=+381.990619373" Jan 20 18:12:06 crc kubenswrapper[4661]: 
I0120 18:12:06.149955 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec" path="/var/lib/kubelet/pods/b0e9e6c1-bd84-4fea-9dd4-9e63a79221ec/volumes" Jan 20 18:12:06 crc kubenswrapper[4661]: I0120 18:12:06.151385 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6" path="/var/lib/kubelet/pods/eb9c754e-79ee-46d7-9d9c-8d2dc3ea55a6/volumes" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.015984 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85"] Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.016703 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.019838 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.020314 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.021373 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.021560 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.023757 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.024066 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.042748 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85"] Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.181090 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce9b787-0ffd-45a3-84df-c297a8bccd80-serving-cert\") pod \"route-controller-manager-7dccc768cd-s5b85\" (UID: \"2ce9b787-0ffd-45a3-84df-c297a8bccd80\") " pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.181187 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rpdp\" (UniqueName: \"kubernetes.io/projected/2ce9b787-0ffd-45a3-84df-c297a8bccd80-kube-api-access-7rpdp\") pod \"route-controller-manager-7dccc768cd-s5b85\" (UID: \"2ce9b787-0ffd-45a3-84df-c297a8bccd80\") " pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.181214 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce9b787-0ffd-45a3-84df-c297a8bccd80-config\") pod \"route-controller-manager-7dccc768cd-s5b85\" (UID: \"2ce9b787-0ffd-45a3-84df-c297a8bccd80\") " 
pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.181237 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ce9b787-0ffd-45a3-84df-c297a8bccd80-client-ca\") pod \"route-controller-manager-7dccc768cd-s5b85\" (UID: \"2ce9b787-0ffd-45a3-84df-c297a8bccd80\") " pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.283127 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rpdp\" (UniqueName: \"kubernetes.io/projected/2ce9b787-0ffd-45a3-84df-c297a8bccd80-kube-api-access-7rpdp\") pod \"route-controller-manager-7dccc768cd-s5b85\" (UID: \"2ce9b787-0ffd-45a3-84df-c297a8bccd80\") " pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.283458 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce9b787-0ffd-45a3-84df-c297a8bccd80-config\") pod \"route-controller-manager-7dccc768cd-s5b85\" (UID: \"2ce9b787-0ffd-45a3-84df-c297a8bccd80\") " pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.284435 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ce9b787-0ffd-45a3-84df-c297a8bccd80-client-ca\") pod \"route-controller-manager-7dccc768cd-s5b85\" (UID: \"2ce9b787-0ffd-45a3-84df-c297a8bccd80\") " pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.284583 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce9b787-0ffd-45a3-84df-c297a8bccd80-config\") pod \"route-controller-manager-7dccc768cd-s5b85\" (UID: \"2ce9b787-0ffd-45a3-84df-c297a8bccd80\") " pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.283489 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ce9b787-0ffd-45a3-84df-c297a8bccd80-client-ca\") pod \"route-controller-manager-7dccc768cd-s5b85\" (UID: \"2ce9b787-0ffd-45a3-84df-c297a8bccd80\") " pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.284703 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce9b787-0ffd-45a3-84df-c297a8bccd80-serving-cert\") pod \"route-controller-manager-7dccc768cd-s5b85\" (UID: \"2ce9b787-0ffd-45a3-84df-c297a8bccd80\") " pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.292912 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce9b787-0ffd-45a3-84df-c297a8bccd80-serving-cert\") pod \"route-controller-manager-7dccc768cd-s5b85\" (UID: \"2ce9b787-0ffd-45a3-84df-c297a8bccd80\") " pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 
crc kubenswrapper[4661]: I0120 18:12:07.301144 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rpdp\" (UniqueName: \"kubernetes.io/projected/2ce9b787-0ffd-45a3-84df-c297a8bccd80-kube-api-access-7rpdp\") pod \"route-controller-manager-7dccc768cd-s5b85\" (UID: \"2ce9b787-0ffd-45a3-84df-c297a8bccd80\") " pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.335456 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:07 crc kubenswrapper[4661]: I0120 18:12:07.750281 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85"] Jan 20 18:12:07 crc kubenswrapper[4661]: W0120 18:12:07.758086 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ce9b787_0ffd_45a3_84df_c297a8bccd80.slice/crio-318b56ff5ca193d6746ee4d495ad7a5fa57fa822199be75473ab9f5d4374e872 WatchSource:0}: Error finding container 318b56ff5ca193d6746ee4d495ad7a5fa57fa822199be75473ab9f5d4374e872: Status 404 returned error can't find the container with id 318b56ff5ca193d6746ee4d495ad7a5fa57fa822199be75473ab9f5d4374e872 Jan 20 18:12:08 crc kubenswrapper[4661]: I0120 18:12:08.648931 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" event={"ID":"2ce9b787-0ffd-45a3-84df-c297a8bccd80","Type":"ContainerStarted","Data":"27bfc1298ad490107e6d5740010a4b333560047cee7eef2901c2e7f58a63b0e0"} Jan 20 18:12:08 crc kubenswrapper[4661]: I0120 18:12:08.649346 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" event={"ID":"2ce9b787-0ffd-45a3-84df-c297a8bccd80","Type":"ContainerStarted","Data":"318b56ff5ca193d6746ee4d495ad7a5fa57fa822199be75473ab9f5d4374e872"} Jan 20 18:12:08 crc kubenswrapper[4661]: I0120 18:12:08.649927 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:08 crc kubenswrapper[4661]: I0120 18:12:08.654677 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" Jan 20 18:12:08 crc kubenswrapper[4661]: I0120 18:12:08.673803 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7dccc768cd-s5b85" podStartSLOduration=8.673785767 podStartE2EDuration="8.673785767s" podCreationTimestamp="2026-01-20 18:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:12:08.669870252 +0000 UTC m=+385.000659914" watchObservedRunningTime="2026-01-20 18:12:08.673785767 +0000 UTC m=+385.004575429" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.615613 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-52www"] Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.617010 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.627009 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-52www"] Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.783251 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0c6313b1-3348-4864-9df3-b6e48fcae560-ca-trust-extracted\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.783333 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.783460 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0c6313b1-3348-4864-9df3-b6e48fcae560-installation-pull-secrets\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.783504 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0c6313b1-3348-4864-9df3-b6e48fcae560-registry-certificates\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.783620 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0c6313b1-3348-4864-9df3-b6e48fcae560-registry-tls\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.783698 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0c6313b1-3348-4864-9df3-b6e48fcae560-trusted-ca\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.783729 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0c6313b1-3348-4864-9df3-b6e48fcae560-bound-sa-token\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.783947 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsw7t\" (UniqueName: 
\"kubernetes.io/projected/0c6313b1-3348-4864-9df3-b6e48fcae560-kube-api-access-fsw7t\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.823261 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.884730 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0c6313b1-3348-4864-9df3-b6e48fcae560-bound-sa-token\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.884781 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsw7t\" (UniqueName: \"kubernetes.io/projected/0c6313b1-3348-4864-9df3-b6e48fcae560-kube-api-access-fsw7t\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.884825 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0c6313b1-3348-4864-9df3-b6e48fcae560-ca-trust-extracted\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.884871 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0c6313b1-3348-4864-9df3-b6e48fcae560-installation-pull-secrets\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.884893 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0c6313b1-3348-4864-9df3-b6e48fcae560-registry-certificates\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.884943 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0c6313b1-3348-4864-9df3-b6e48fcae560-registry-tls\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.884968 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0c6313b1-3348-4864-9df3-b6e48fcae560-trusted-ca\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.886253 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0c6313b1-3348-4864-9df3-b6e48fcae560-trusted-ca\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.886379 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0c6313b1-3348-4864-9df3-b6e48fcae560-registry-certificates\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.886379 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0c6313b1-3348-4864-9df3-b6e48fcae560-ca-trust-extracted\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.892253 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0c6313b1-3348-4864-9df3-b6e48fcae560-installation-pull-secrets\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.892824 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0c6313b1-3348-4864-9df3-b6e48fcae560-registry-tls\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.902096 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsw7t\" (UniqueName: \"kubernetes.io/projected/0c6313b1-3348-4864-9df3-b6e48fcae560-kube-api-access-fsw7t\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.911593 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0c6313b1-3348-4864-9df3-b6e48fcae560-bound-sa-token\") pod \"image-registry-66df7c8f76-52www\" (UID: \"0c6313b1-3348-4864-9df3-b6e48fcae560\") " pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:20 crc kubenswrapper[4661]: I0120 18:12:20.936746 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:21 crc kubenswrapper[4661]: I0120 18:12:21.171810 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-52www"] Jan 20 18:12:21 crc kubenswrapper[4661]: W0120 18:12:21.177797 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c6313b1_3348_4864_9df3_b6e48fcae560.slice/crio-874473879b3e54daf66331ba4fc44cdc44e82b46dbd5453c4986de23c4b8b4d6 WatchSource:0}: Error finding container 874473879b3e54daf66331ba4fc44cdc44e82b46dbd5453c4986de23c4b8b4d6: Status 404 returned error can't find the container with id 874473879b3e54daf66331ba4fc44cdc44e82b46dbd5453c4986de23c4b8b4d6 Jan 20 18:12:21 crc kubenswrapper[4661]: I0120 18:12:21.721643 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-52www" event={"ID":"0c6313b1-3348-4864-9df3-b6e48fcae560","Type":"ContainerStarted","Data":"d9d5af6d96c2ef71d1513df901178aafd9e051085c4d4eea715cd31a989c893a"} Jan 20 18:12:21 crc kubenswrapper[4661]: I0120 18:12:21.722214 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:21 crc kubenswrapper[4661]: I0120 18:12:21.722229 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-52www" event={"ID":"0c6313b1-3348-4864-9df3-b6e48fcae560","Type":"ContainerStarted","Data":"874473879b3e54daf66331ba4fc44cdc44e82b46dbd5453c4986de23c4b8b4d6"} Jan 20 18:12:21 crc kubenswrapper[4661]: I0120 18:12:21.754268 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-52www" podStartSLOduration=1.754250763 podStartE2EDuration="1.754250763s" podCreationTimestamp="2026-01-20 18:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:12:21.747533782 +0000 UTC m=+398.078323444" watchObservedRunningTime="2026-01-20 18:12:21.754250763 +0000 UTC m=+398.085040425" Jan 20 18:12:29 crc kubenswrapper[4661]: I0120 18:12:29.323782 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:12:29 crc kubenswrapper[4661]: I0120 18:12:29.324394 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:12:40 crc kubenswrapper[4661]: I0120 18:12:40.946500 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-52www" Jan 20 18:12:41 crc kubenswrapper[4661]: I0120 18:12:41.040193 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7m2kh"] Jan 20 18:12:59 crc kubenswrapper[4661]: I0120 18:12:59.324240 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:12:59 crc kubenswrapper[4661]: I0120 18:12:59.326176 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:12:59 crc kubenswrapper[4661]: I0120 18:12:59.326479 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:12:59 crc kubenswrapper[4661]: I0120 18:12:59.327436 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99d4d400e62492d7e5ae5501c92fd17df5fa6c400aad3dbfd4b4f9a9fbee2fb0"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 18:12:59 crc kubenswrapper[4661]: I0120 18:12:59.327594 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://99d4d400e62492d7e5ae5501c92fd17df5fa6c400aad3dbfd4b4f9a9fbee2fb0" gracePeriod=600 Jan 20 18:13:00 crc kubenswrapper[4661]: I0120 18:13:00.014282 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="99d4d400e62492d7e5ae5501c92fd17df5fa6c400aad3dbfd4b4f9a9fbee2fb0" exitCode=0 Jan 20 18:13:00 crc kubenswrapper[4661]: I0120 18:13:00.014382 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"99d4d400e62492d7e5ae5501c92fd17df5fa6c400aad3dbfd4b4f9a9fbee2fb0"} Jan 20 18:13:00 crc kubenswrapper[4661]: I0120 18:13:00.014640 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"c6bcba7fc6b732bb22c0a69a286a16f49bd4540d1c1a29be1bebfef4cffede69"} Jan 20 18:13:00 crc kubenswrapper[4661]: I0120 18:13:00.014687 4661 scope.go:117] "RemoveContainer" containerID="7dad5141c6e2e07d42bee1c473efffa900d0d900467b1524cd59962582696a3e" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.092970 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" podUID="1a3225a4-585b-4ad0-9951-c5feae37b6cc" containerName="registry" containerID="cri-o://89f9fec8fd3522e747064cdfdb72ed825348852b56e5dba941f2e53edbd881e1" gracePeriod=30 Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.498915 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.643453 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a3225a4-585b-4ad0-9951-c5feae37b6cc-installation-pull-secrets\") pod \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.643538 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a3225a4-585b-4ad0-9951-c5feae37b6cc-trusted-ca\") pod \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.643629 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-bound-sa-token\") pod \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.643723 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a3225a4-585b-4ad0-9951-c5feae37b6cc-registry-certificates\") pod \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.643814 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a3225a4-585b-4ad0-9951-c5feae37b6cc-ca-trust-extracted\") pod \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.644105 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.644205 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-registry-tls\") pod \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.644274 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfg74\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-kube-api-access-vfg74\") pod \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\" (UID: \"1a3225a4-585b-4ad0-9951-c5feae37b6cc\") " Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.644520 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a3225a4-585b-4ad0-9951-c5feae37b6cc-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "1a3225a4-585b-4ad0-9951-c5feae37b6cc" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.644731 4661 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a3225a4-585b-4ad0-9951-c5feae37b6cc-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.644929 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a3225a4-585b-4ad0-9951-c5feae37b6cc-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "1a3225a4-585b-4ad0-9951-c5feae37b6cc" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.656247 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "1a3225a4-585b-4ad0-9951-c5feae37b6cc" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.659880 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-kube-api-access-vfg74" (OuterVolumeSpecName: "kube-api-access-vfg74") pod "1a3225a4-585b-4ad0-9951-c5feae37b6cc" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc"). InnerVolumeSpecName "kube-api-access-vfg74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.661079 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a3225a4-585b-4ad0-9951-c5feae37b6cc-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "1a3225a4-585b-4ad0-9951-c5feae37b6cc" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.661336 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "1a3225a4-585b-4ad0-9951-c5feae37b6cc" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.664404 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "1a3225a4-585b-4ad0-9951-c5feae37b6cc" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.665966 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a3225a4-585b-4ad0-9951-c5feae37b6cc-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "1a3225a4-585b-4ad0-9951-c5feae37b6cc" (UID: "1a3225a4-585b-4ad0-9951-c5feae37b6cc"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.746462 4661 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.746557 4661 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a3225a4-585b-4ad0-9951-c5feae37b6cc-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.746588 4661 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a3225a4-585b-4ad0-9951-c5feae37b6cc-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.746646 4661 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.746922 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfg74\" (UniqueName: \"kubernetes.io/projected/1a3225a4-585b-4ad0-9951-c5feae37b6cc-kube-api-access-vfg74\") on node \"crc\" DevicePath \"\"" Jan 20 18:13:06 crc kubenswrapper[4661]: I0120 18:13:06.747007 4661 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a3225a4-585b-4ad0-9951-c5feae37b6cc-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 18:13:07 crc kubenswrapper[4661]: I0120 18:13:07.068486 4661 generic.go:334] "Generic (PLEG): container finished" podID="1a3225a4-585b-4ad0-9951-c5feae37b6cc" containerID="89f9fec8fd3522e747064cdfdb72ed825348852b56e5dba941f2e53edbd881e1" exitCode=0 Jan 20 18:13:07 crc kubenswrapper[4661]: I0120 18:13:07.068530 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" event={"ID":"1a3225a4-585b-4ad0-9951-c5feae37b6cc","Type":"ContainerDied","Data":"89f9fec8fd3522e747064cdfdb72ed825348852b56e5dba941f2e53edbd881e1"} Jan 20 18:13:07 crc kubenswrapper[4661]: I0120 18:13:07.068557 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" event={"ID":"1a3225a4-585b-4ad0-9951-c5feae37b6cc","Type":"ContainerDied","Data":"f235571a41b0a8546bde42679a8fc71c3b6b4c2b1fc0c865be3d4f9fed59544d"} Jan 20 18:13:07 crc kubenswrapper[4661]: I0120 18:13:07.068577 4661 scope.go:117] "RemoveContainer" containerID="89f9fec8fd3522e747064cdfdb72ed825348852b56e5dba941f2e53edbd881e1" Jan 20 18:13:07 crc kubenswrapper[4661]: I0120 18:13:07.068862 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7m2kh" Jan 20 18:13:07 crc kubenswrapper[4661]: I0120 18:13:07.085438 4661 scope.go:117] "RemoveContainer" containerID="89f9fec8fd3522e747064cdfdb72ed825348852b56e5dba941f2e53edbd881e1" Jan 20 18:13:07 crc kubenswrapper[4661]: E0120 18:13:07.086148 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89f9fec8fd3522e747064cdfdb72ed825348852b56e5dba941f2e53edbd881e1\": container with ID starting with 89f9fec8fd3522e747064cdfdb72ed825348852b56e5dba941f2e53edbd881e1 not found: ID does not exist" containerID="89f9fec8fd3522e747064cdfdb72ed825348852b56e5dba941f2e53edbd881e1" Jan 20 18:13:07 crc kubenswrapper[4661]: I0120 18:13:07.086302 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89f9fec8fd3522e747064cdfdb72ed825348852b56e5dba941f2e53edbd881e1"} err="failed to get container status \"89f9fec8fd3522e747064cdfdb72ed825348852b56e5dba941f2e53edbd881e1\": rpc error: code = NotFound desc = could not find container \"89f9fec8fd3522e747064cdfdb72ed825348852b56e5dba941f2e53edbd881e1\": container with ID starting with 89f9fec8fd3522e747064cdfdb72ed825348852b56e5dba941f2e53edbd881e1 not found: ID does not exist" Jan 20 18:13:07 crc kubenswrapper[4661]: I0120 18:13:07.096202 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7m2kh"] Jan 20 18:13:07 crc kubenswrapper[4661]: I0120 18:13:07.102433 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7m2kh"] Jan 20 18:13:08 crc kubenswrapper[4661]: I0120 18:13:08.154024 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a3225a4-585b-4ad0-9951-c5feae37b6cc" path="/var/lib/kubelet/pods/1a3225a4-585b-4ad0-9951-c5feae37b6cc/volumes" Jan 20 18:14:59 crc kubenswrapper[4661]: I0120 18:14:59.324053 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:14:59 crc kubenswrapper[4661]: I0120 18:14:59.324547 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.158557 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww"] Jan 20 18:15:00 crc kubenswrapper[4661]: E0120 18:15:00.159023 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a3225a4-585b-4ad0-9951-c5feae37b6cc" containerName="registry" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.159138 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3225a4-585b-4ad0-9951-c5feae37b6cc" containerName="registry" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.159290 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a3225a4-585b-4ad0-9951-c5feae37b6cc" containerName="registry" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.159733 4661 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.162763 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.162848 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.171258 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww"] Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.271046 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5jn2\" (UniqueName: \"kubernetes.io/projected/63f3a411-1b30-43f8-a0d4-dcf489b965c2-kube-api-access-n5jn2\") pod \"collect-profiles-29482215-vhxww\" (UID: \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.271298 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63f3a411-1b30-43f8-a0d4-dcf489b965c2-secret-volume\") pod \"collect-profiles-29482215-vhxww\" (UID: \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.271384 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63f3a411-1b30-43f8-a0d4-dcf489b965c2-config-volume\") pod \"collect-profiles-29482215-vhxww\" (UID: \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.372002 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63f3a411-1b30-43f8-a0d4-dcf489b965c2-secret-volume\") pod \"collect-profiles-29482215-vhxww\" (UID: \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.372076 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63f3a411-1b30-43f8-a0d4-dcf489b965c2-config-volume\") pod \"collect-profiles-29482215-vhxww\" (UID: \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.372772 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5jn2\" (UniqueName: \"kubernetes.io/projected/63f3a411-1b30-43f8-a0d4-dcf489b965c2-kube-api-access-n5jn2\") pod \"collect-profiles-29482215-vhxww\" (UID: \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.373092 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/63f3a411-1b30-43f8-a0d4-dcf489b965c2-config-volume\") pod \"collect-profiles-29482215-vhxww\" (UID: \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.387414 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63f3a411-1b30-43f8-a0d4-dcf489b965c2-secret-volume\") pod \"collect-profiles-29482215-vhxww\" (UID: \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.397847 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5jn2\" (UniqueName: \"kubernetes.io/projected/63f3a411-1b30-43f8-a0d4-dcf489b965c2-kube-api-access-n5jn2\") pod \"collect-profiles-29482215-vhxww\" (UID: \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.515714 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.719527 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww"] Jan 20 18:15:00 crc kubenswrapper[4661]: I0120 18:15:00.800532 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" event={"ID":"63f3a411-1b30-43f8-a0d4-dcf489b965c2","Type":"ContainerStarted","Data":"64c217cd69734de849ef872a5884e5b65d46eb9897256618fbe743d40fc9817d"} Jan 20 18:15:01 crc kubenswrapper[4661]: I0120 18:15:01.808904 4661 generic.go:334] "Generic (PLEG): container finished" podID="63f3a411-1b30-43f8-a0d4-dcf489b965c2" containerID="c7b52367cbb7cd7548a6b5d3b0d16fb75925f643c6e2896da0bdc597e4ec0832" exitCode=0 Jan 20 18:15:01 crc kubenswrapper[4661]: I0120 18:15:01.808972 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" event={"ID":"63f3a411-1b30-43f8-a0d4-dcf489b965c2","Type":"ContainerDied","Data":"c7b52367cbb7cd7548a6b5d3b0d16fb75925f643c6e2896da0bdc597e4ec0832"} Jan 20 18:15:03 crc kubenswrapper[4661]: I0120 18:15:03.095172 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" Jan 20 18:15:03 crc kubenswrapper[4661]: I0120 18:15:03.208957 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63f3a411-1b30-43f8-a0d4-dcf489b965c2-config-volume\") pod \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\" (UID: \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\") " Jan 20 18:15:03 crc kubenswrapper[4661]: I0120 18:15:03.209078 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5jn2\" (UniqueName: \"kubernetes.io/projected/63f3a411-1b30-43f8-a0d4-dcf489b965c2-kube-api-access-n5jn2\") pod \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\" (UID: \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\") " Jan 20 18:15:03 crc kubenswrapper[4661]: I0120 18:15:03.209252 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63f3a411-1b30-43f8-a0d4-dcf489b965c2-secret-volume\") pod \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\" (UID: \"63f3a411-1b30-43f8-a0d4-dcf489b965c2\") " Jan 20 18:15:03 crc kubenswrapper[4661]: I0120 18:15:03.210163 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63f3a411-1b30-43f8-a0d4-dcf489b965c2-config-volume" (OuterVolumeSpecName: "config-volume") pod "63f3a411-1b30-43f8-a0d4-dcf489b965c2" (UID: "63f3a411-1b30-43f8-a0d4-dcf489b965c2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:15:03 crc kubenswrapper[4661]: I0120 18:15:03.214647 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f3a411-1b30-43f8-a0d4-dcf489b965c2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "63f3a411-1b30-43f8-a0d4-dcf489b965c2" (UID: "63f3a411-1b30-43f8-a0d4-dcf489b965c2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:15:03 crc kubenswrapper[4661]: I0120 18:15:03.214753 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f3a411-1b30-43f8-a0d4-dcf489b965c2-kube-api-access-n5jn2" (OuterVolumeSpecName: "kube-api-access-n5jn2") pod "63f3a411-1b30-43f8-a0d4-dcf489b965c2" (UID: "63f3a411-1b30-43f8-a0d4-dcf489b965c2"). InnerVolumeSpecName "kube-api-access-n5jn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:15:03 crc kubenswrapper[4661]: I0120 18:15:03.310940 4661 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63f3a411-1b30-43f8-a0d4-dcf489b965c2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 18:15:03 crc kubenswrapper[4661]: I0120 18:15:03.310986 4661 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63f3a411-1b30-43f8-a0d4-dcf489b965c2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 18:15:03 crc kubenswrapper[4661]: I0120 18:15:03.310999 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5jn2\" (UniqueName: \"kubernetes.io/projected/63f3a411-1b30-43f8-a0d4-dcf489b965c2-kube-api-access-n5jn2\") on node \"crc\" DevicePath \"\"" Jan 20 18:15:03 crc kubenswrapper[4661]: I0120 18:15:03.823960 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" event={"ID":"63f3a411-1b30-43f8-a0d4-dcf489b965c2","Type":"ContainerDied","Data":"64c217cd69734de849ef872a5884e5b65d46eb9897256618fbe743d40fc9817d"} Jan 20 18:15:03 crc kubenswrapper[4661]: I0120 18:15:03.824235 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64c217cd69734de849ef872a5884e5b65d46eb9897256618fbe743d40fc9817d" Jan 20 18:15:03 crc kubenswrapper[4661]: I0120 18:15:03.824028 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww" Jan 20 18:15:29 crc kubenswrapper[4661]: I0120 18:15:29.323539 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:15:29 crc kubenswrapper[4661]: I0120 18:15:29.324558 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:15:59 crc kubenswrapper[4661]: I0120 18:15:59.323466 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:15:59 crc kubenswrapper[4661]: I0120 18:15:59.323974 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:15:59 crc kubenswrapper[4661]: I0120 18:15:59.324027 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:15:59 crc kubenswrapper[4661]: I0120 18:15:59.324712 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c6bcba7fc6b732bb22c0a69a286a16f49bd4540d1c1a29be1bebfef4cffede69"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 18:15:59 crc kubenswrapper[4661]: I0120 18:15:59.324783 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://c6bcba7fc6b732bb22c0a69a286a16f49bd4540d1c1a29be1bebfef4cffede69" gracePeriod=600 Jan 20 18:16:00 crc kubenswrapper[4661]: I0120 18:16:00.178654 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="c6bcba7fc6b732bb22c0a69a286a16f49bd4540d1c1a29be1bebfef4cffede69" exitCode=0 Jan 20 18:16:00 crc kubenswrapper[4661]: I0120 18:16:00.178725 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"c6bcba7fc6b732bb22c0a69a286a16f49bd4540d1c1a29be1bebfef4cffede69"} Jan 20 18:16:00 crc kubenswrapper[4661]: I0120 18:16:00.179181 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"07ea6c09f7f6b3cd3c82aa283c5480b53e463086680df4020a3d82e4e318e5b2"} Jan 20 18:16:00 crc kubenswrapper[4661]: I0120 18:16:00.179212 4661 scope.go:117] "RemoveContainer" containerID="99d4d400e62492d7e5ae5501c92fd17df5fa6c400aad3dbfd4b4f9a9fbee2fb0" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.183403 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-f7qg8"] Jan 20 18:16:50 crc kubenswrapper[4661]: E0120 18:16:50.184661 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f3a411-1b30-43f8-a0d4-dcf489b965c2" containerName="collect-profiles" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.184696 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f3a411-1b30-43f8-a0d4-dcf489b965c2" containerName="collect-profiles" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.184816 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f3a411-1b30-43f8-a0d4-dcf489b965c2" containerName="collect-profiles" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.185308 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f7qg8" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.191018 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.192729 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.192784 4661 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4p5td" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.198691 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-f7qg8"] Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.214387 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2wqcl"] Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.215093 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2wqcl" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.216098 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdptl\" (UniqueName: \"kubernetes.io/projected/13a9f3bc-c133-49ea-9cfd-bc8c107e32c6-kube-api-access-zdptl\") pod \"cert-manager-858654f9db-2wqcl\" (UID: \"13a9f3bc-c133-49ea-9cfd-bc8c107e32c6\") " pod="cert-manager/cert-manager-858654f9db-2wqcl" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.216126 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r5c5\" (UniqueName: \"kubernetes.io/projected/3c6e82bb-badf-4079-abf0-566f4b6f0776-kube-api-access-4r5c5\") pod \"cert-manager-cainjector-cf98fcc89-f7qg8\" (UID: \"3c6e82bb-badf-4079-abf0-566f4b6f0776\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-f7qg8" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.217565 4661 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jzm4r" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.221028 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-scrrz"] Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.221742 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-scrrz" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.223424 4661 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jnmb4" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.239812 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2wqcl"] Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.248220 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-scrrz"] Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.316952 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdptl\" (UniqueName: \"kubernetes.io/projected/13a9f3bc-c133-49ea-9cfd-bc8c107e32c6-kube-api-access-zdptl\") pod \"cert-manager-858654f9db-2wqcl\" (UID: \"13a9f3bc-c133-49ea-9cfd-bc8c107e32c6\") " pod="cert-manager/cert-manager-858654f9db-2wqcl" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.317003 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r5c5\" (UniqueName: \"kubernetes.io/projected/3c6e82bb-badf-4079-abf0-566f4b6f0776-kube-api-access-4r5c5\") pod \"cert-manager-cainjector-cf98fcc89-f7qg8\" (UID: \"3c6e82bb-badf-4079-abf0-566f4b6f0776\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-f7qg8" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.317050 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5gtj\" (UniqueName: \"kubernetes.io/projected/b1feddfe-5c29-4eba-99c5-65849498f0dc-kube-api-access-q5gtj\") pod \"cert-manager-webhook-687f57d79b-scrrz\" (UID: \"b1feddfe-5c29-4eba-99c5-65849498f0dc\") " pod="cert-manager/cert-manager-webhook-687f57d79b-scrrz" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.334785 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdptl\" (UniqueName: \"kubernetes.io/projected/13a9f3bc-c133-49ea-9cfd-bc8c107e32c6-kube-api-access-zdptl\") pod \"cert-manager-858654f9db-2wqcl\" (UID: \"13a9f3bc-c133-49ea-9cfd-bc8c107e32c6\") " pod="cert-manager/cert-manager-858654f9db-2wqcl" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.336132 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r5c5\" (UniqueName: \"kubernetes.io/projected/3c6e82bb-badf-4079-abf0-566f4b6f0776-kube-api-access-4r5c5\") pod \"cert-manager-cainjector-cf98fcc89-f7qg8\" (UID: \"3c6e82bb-badf-4079-abf0-566f4b6f0776\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-f7qg8" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.420861 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5gtj\" (UniqueName: \"kubernetes.io/projected/b1feddfe-5c29-4eba-99c5-65849498f0dc-kube-api-access-q5gtj\") pod \"cert-manager-webhook-687f57d79b-scrrz\" (UID: \"b1feddfe-5c29-4eba-99c5-65849498f0dc\") " pod="cert-manager/cert-manager-webhook-687f57d79b-scrrz" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.437267 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5gtj\" (UniqueName: \"kubernetes.io/projected/b1feddfe-5c29-4eba-99c5-65849498f0dc-kube-api-access-q5gtj\") pod \"cert-manager-webhook-687f57d79b-scrrz\" (UID: \"b1feddfe-5c29-4eba-99c5-65849498f0dc\") " pod="cert-manager/cert-manager-webhook-687f57d79b-scrrz" Jan 
20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.500095 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f7qg8" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.530586 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2wqcl" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.540936 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-scrrz" Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.826201 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2wqcl"] Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.834136 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 18:16:50 crc kubenswrapper[4661]: I0120 18:16:50.866744 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-f7qg8"] Jan 20 18:16:50 crc kubenswrapper[4661]: W0120 18:16:50.870528 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c6e82bb_badf_4079_abf0_566f4b6f0776.slice/crio-8365d9f9f2e4d2f576bfffc851e1f4fa91dc932d5fb0a0b6005d330884e5881f WatchSource:0}: Error finding container 8365d9f9f2e4d2f576bfffc851e1f4fa91dc932d5fb0a0b6005d330884e5881f: Status 404 returned error can't find the container with id 8365d9f9f2e4d2f576bfffc851e1f4fa91dc932d5fb0a0b6005d330884e5881f Jan 20 18:16:51 crc kubenswrapper[4661]: I0120 18:16:51.033848 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-scrrz"] Jan 20 18:16:51 crc kubenswrapper[4661]: W0120 18:16:51.041508 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1feddfe_5c29_4eba_99c5_65849498f0dc.slice/crio-fba78fdd296d83f9c3e08f056e4502e367e6d6c551565dd35121ee1aa87beec3 WatchSource:0}: Error finding container fba78fdd296d83f9c3e08f056e4502e367e6d6c551565dd35121ee1aa87beec3: Status 404 returned error can't find the container with id fba78fdd296d83f9c3e08f056e4502e367e6d6c551565dd35121ee1aa87beec3 Jan 20 18:16:51 crc kubenswrapper[4661]: I0120 18:16:51.480385 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f7qg8" event={"ID":"3c6e82bb-badf-4079-abf0-566f4b6f0776","Type":"ContainerStarted","Data":"8365d9f9f2e4d2f576bfffc851e1f4fa91dc932d5fb0a0b6005d330884e5881f"} Jan 20 18:16:51 crc kubenswrapper[4661]: I0120 18:16:51.483253 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-scrrz" event={"ID":"b1feddfe-5c29-4eba-99c5-65849498f0dc","Type":"ContainerStarted","Data":"fba78fdd296d83f9c3e08f056e4502e367e6d6c551565dd35121ee1aa87beec3"} Jan 20 18:16:51 crc kubenswrapper[4661]: I0120 18:16:51.484972 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2wqcl" event={"ID":"13a9f3bc-c133-49ea-9cfd-bc8c107e32c6","Type":"ContainerStarted","Data":"af9885ebcda38d7889108e120c6fc90c08b427dc5d65ef6979fd6de2687692a8"} Jan 20 18:16:56 crc kubenswrapper[4661]: I0120 18:16:56.538577 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2wqcl" 
event={"ID":"13a9f3bc-c133-49ea-9cfd-bc8c107e32c6","Type":"ContainerStarted","Data":"4cc12b1de7442873533ea5a45632fbeda50293f417b59e8c39d5bfe90ab3c7b1"} Jan 20 18:16:56 crc kubenswrapper[4661]: I0120 18:16:56.540155 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f7qg8" event={"ID":"3c6e82bb-badf-4079-abf0-566f4b6f0776","Type":"ContainerStarted","Data":"40501d427174ec056df5e7f3067e757e5db5b5006d1294ee836cdbf87ec1ab84"} Jan 20 18:16:56 crc kubenswrapper[4661]: I0120 18:16:56.541848 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-scrrz" event={"ID":"b1feddfe-5c29-4eba-99c5-65849498f0dc","Type":"ContainerStarted","Data":"3a45be3e09c7a248a9c645c303cc194f4005d07547089d827bfba45e490e678d"} Jan 20 18:16:56 crc kubenswrapper[4661]: I0120 18:16:56.542193 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-scrrz" Jan 20 18:16:56 crc kubenswrapper[4661]: I0120 18:16:56.561450 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2wqcl" podStartSLOduration=2.108067848 podStartE2EDuration="6.561417601s" podCreationTimestamp="2026-01-20 18:16:50 +0000 UTC" firstStartedPulling="2026-01-20 18:16:50.833866062 +0000 UTC m=+667.164655734" lastFinishedPulling="2026-01-20 18:16:55.287215805 +0000 UTC m=+671.618005487" observedRunningTime="2026-01-20 18:16:56.55417193 +0000 UTC m=+672.884961592" watchObservedRunningTime="2026-01-20 18:16:56.561417601 +0000 UTC m=+672.892207263" Jan 20 18:16:56 crc kubenswrapper[4661]: I0120 18:16:56.572154 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-f7qg8" podStartSLOduration=2.1430627270000002 podStartE2EDuration="6.572132203s" podCreationTimestamp="2026-01-20 18:16:50 +0000 UTC" firstStartedPulling="2026-01-20 18:16:50.872735364 +0000 UTC m=+667.203525016" lastFinishedPulling="2026-01-20 18:16:55.30180483 +0000 UTC m=+671.632594492" observedRunningTime="2026-01-20 18:16:56.568221279 +0000 UTC m=+672.899010971" watchObservedRunningTime="2026-01-20 18:16:56.572132203 +0000 UTC m=+672.902921865" Jan 20 18:17:00 crc kubenswrapper[4661]: I0120 18:17:00.092421 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-scrrz" podStartSLOduration=5.767296297 podStartE2EDuration="10.092404055s" podCreationTimestamp="2026-01-20 18:16:50 +0000 UTC" firstStartedPulling="2026-01-20 18:16:51.044007852 +0000 UTC m=+667.374797504" lastFinishedPulling="2026-01-20 18:16:55.3691156 +0000 UTC m=+671.699905262" observedRunningTime="2026-01-20 18:16:56.590617241 +0000 UTC m=+672.921406903" watchObservedRunningTime="2026-01-20 18:17:00.092404055 +0000 UTC m=+676.423193717" Jan 20 18:17:00 crc kubenswrapper[4661]: I0120 18:17:00.095827 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fxb9d"] Jan 20 18:17:00 crc kubenswrapper[4661]: I0120 18:17:00.096187 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovn-controller" containerID="cri-o://407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41" gracePeriod=30 Jan 20 18:17:00 crc kubenswrapper[4661]: I0120 18:17:00.096485 4661 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="northd" containerID="cri-o://1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c" gracePeriod=30 Jan 20 18:17:00 crc kubenswrapper[4661]: I0120 18:17:00.096532 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovn-acl-logging" containerID="cri-o://d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f" gracePeriod=30 Jan 20 18:17:00 crc kubenswrapper[4661]: I0120 18:17:00.096535 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66" gracePeriod=30 Jan 20 18:17:00 crc kubenswrapper[4661]: I0120 18:17:00.096567 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="kube-rbac-proxy-node" containerID="cri-o://54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84" gracePeriod=30 Jan 20 18:17:00 crc kubenswrapper[4661]: I0120 18:17:00.096509 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="sbdb" containerID="cri-o://dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648" gracePeriod=30 Jan 20 18:17:00 crc kubenswrapper[4661]: I0120 18:17:00.096458 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="nbdb" containerID="cri-o://6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b" gracePeriod=30 Jan 20 18:17:00 crc kubenswrapper[4661]: I0120 18:17:00.156619 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" containerID="cri-o://229a253605fb06114bb299f6125c0ea1a738620cfb8a51ac9b53d4eb809f736d" gracePeriod=30 Jan 20 18:17:00 crc kubenswrapper[4661]: I0120 18:17:00.544103 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-scrrz" Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.579116 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovnkube-controller/3.log" Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.583246 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovn-acl-logging/0.log" Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.584145 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovn-controller/0.log" Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585323 4661 generic.go:334] "Generic (PLEG): container finished" podID="3856f23c-8dc3-4b36-b3b7-955dff315250" 
containerID="229a253605fb06114bb299f6125c0ea1a738620cfb8a51ac9b53d4eb809f736d" exitCode=0 Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585358 4661 generic.go:334] "Generic (PLEG): container finished" podID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerID="dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648" exitCode=0 Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585404 4661 generic.go:334] "Generic (PLEG): container finished" podID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerID="6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b" exitCode=0 Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585396 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"229a253605fb06114bb299f6125c0ea1a738620cfb8a51ac9b53d4eb809f736d"} Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585462 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648"} Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585417 4661 generic.go:334] "Generic (PLEG): container finished" podID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerID="1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c" exitCode=0 Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585480 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b"} Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585485 4661 generic.go:334] "Generic (PLEG): container finished" podID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerID="37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66" exitCode=0 Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585495 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c"} Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585498 4661 generic.go:334] "Generic (PLEG): container finished" podID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerID="54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84" exitCode=0 Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585510 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66"} Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585523 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84"} Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585535 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" 
event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f"} Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585511 4661 generic.go:334] "Generic (PLEG): container finished" podID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerID="d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f" exitCode=143 Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585554 4661 scope.go:117] "RemoveContainer" containerID="a87060cdc681c7299812827e762152da6ae48e5862cda4b15a238c2ac16c60e7" Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585575 4661 generic.go:334] "Generic (PLEG): container finished" podID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerID="407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41" exitCode=143 Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.585719 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41"} Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.588437 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/2.log" Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.589069 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/1.log" Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.589147 4661 generic.go:334] "Generic (PLEG): container finished" podID="5b6f2401-3eb9-4ee4-b79c-6faee06bc21c" containerID="3b3a01654e524ee1a13ea5553a8ca6b24eb116690557d8b8604407d8577198dd" exitCode=2 Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.589176 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-z97p2" event={"ID":"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c","Type":"ContainerDied","Data":"3b3a01654e524ee1a13ea5553a8ca6b24eb116690557d8b8604407d8577198dd"} Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.590124 4661 scope.go:117] "RemoveContainer" containerID="3b3a01654e524ee1a13ea5553a8ca6b24eb116690557d8b8604407d8577198dd" Jan 20 18:17:01 crc kubenswrapper[4661]: E0120 18:17:01.590529 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-z97p2_openshift-multus(5b6f2401-3eb9-4ee4-b79c-6faee06bc21c)\"" pod="openshift-multus/multus-z97p2" podUID="5b6f2401-3eb9-4ee4-b79c-6faee06bc21c" Jan 20 18:17:01 crc kubenswrapper[4661]: I0120 18:17:01.635479 4661 scope.go:117] "RemoveContainer" containerID="ab1840afe6e204ba16157cfa4140926ab50dd66d6b72a0e49e4ef986f62c7e34" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.378868 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovn-acl-logging/0.log" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.379639 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovn-controller/0.log" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.380291 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.449939 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jdwwx"] Jan 20 18:17:02 crc kubenswrapper[4661]: E0120 18:17:02.450161 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450175 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: E0120 18:17:02.450182 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450192 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: E0120 18:17:02.450202 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="sbdb" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450207 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="sbdb" Jan 20 18:17:02 crc kubenswrapper[4661]: E0120 18:17:02.450217 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450224 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: E0120 18:17:02.450234 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="kubecfg-setup" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450240 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="kubecfg-setup" Jan 20 18:17:02 crc kubenswrapper[4661]: E0120 18:17:02.450251 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="northd" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450256 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="northd" Jan 20 18:17:02 crc kubenswrapper[4661]: E0120 18:17:02.450263 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovn-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450268 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovn-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: E0120 18:17:02.450278 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovn-acl-logging" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450284 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovn-acl-logging" Jan 20 18:17:02 crc kubenswrapper[4661]: E0120 18:17:02.450292 4661 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="kube-rbac-proxy-node" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450299 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="kube-rbac-proxy-node" Jan 20 18:17:02 crc kubenswrapper[4661]: E0120 18:17:02.450308 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450313 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 18:17:02 crc kubenswrapper[4661]: E0120 18:17:02.450326 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="nbdb" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450332 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="nbdb" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450437 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="northd" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450445 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="nbdb" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450452 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovn-acl-logging" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450461 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450470 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450476 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovn-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450483 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450490 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450497 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="sbdb" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450505 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="kube-rbac-proxy-node" Jan 20 18:17:02 crc kubenswrapper[4661]: E0120 18:17:02.450603 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450611 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: E0120 
18:17:02.450618 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450624 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450771 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.450778 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" containerName="ovnkube-controller" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.452461 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.498758 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-openvswitch\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.498818 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-log-socket\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.498861 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-env-overrides\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.498915 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-run-netns\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.498909 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.498939 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-cni-netd\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.498962 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-log-socket" (OuterVolumeSpecName: "log-socket") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.498998 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499015 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-node-log\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499052 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-systemd\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499070 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-kubelet\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499095 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499119 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-etc-openvswitch\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499139 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-cni-bin\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499171 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-ovnkube-config\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499194 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-ovnkube-script-lib\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499215 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-var-lib-openvswitch\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499234 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-run-ovn-kubernetes\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499261 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66kpm\" (UniqueName: \"kubernetes.io/projected/3856f23c-8dc3-4b36-b3b7-955dff315250-kube-api-access-66kpm\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499317 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3856f23c-8dc3-4b36-b3b7-955dff315250-ovn-node-metrics-cert\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499336 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-slash\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499349 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-systemd-units\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499364 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-ovn\") pod \"3856f23c-8dc3-4b36-b3b7-955dff315250\" (UID: \"3856f23c-8dc3-4b36-b3b7-955dff315250\") " Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499424 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499460 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499583 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). 
InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499645 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499816 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499837 4661 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499857 4661 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499859 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499874 4661 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-log-socket\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499888 4661 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499892 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499897 4661 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499917 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-node-log" (OuterVolumeSpecName: "node-log") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499928 4661 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499951 4661 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.499984 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.500013 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.500076 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-slash" (OuterVolumeSpecName: "host-slash") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.500110 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.500136 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.500157 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.509534 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3856f23c-8dc3-4b36-b3b7-955dff315250-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.520132 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3856f23c-8dc3-4b36-b3b7-955dff315250-kube-api-access-66kpm" (OuterVolumeSpecName: "kube-api-access-66kpm") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "kube-api-access-66kpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.528159 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3856f23c-8dc3-4b36-b3b7-955dff315250" (UID: "3856f23c-8dc3-4b36-b3b7-955dff315250"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.599960 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovn-acl-logging/0.log" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.601422 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-slash\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.601495 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-var-lib-openvswitch\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.601522 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d47ll\" (UniqueName: \"kubernetes.io/projected/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-kube-api-access-d47ll\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.601553 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-run-systemd\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.601593 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-run-netns\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.601622 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-cni-netd\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.601655 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-env-overrides\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.601711 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-cni-bin\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: 
I0120 18:17:02.601750 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-log-socket\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.601784 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-run-ovn-kubernetes\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.601849 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-systemd-units\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.601910 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.601919 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fxb9d_3856f23c-8dc3-4b36-b3b7-955dff315250/ovn-controller/0.log" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.601985 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-etc-openvswitch\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602024 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-kubelet\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602058 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-ovnkube-script-lib\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602097 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-run-ovn\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602124 4661 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-node-log\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602158 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-ovnkube-config\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602233 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-run-openvswitch\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602269 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-ovn-node-metrics-cert\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602330 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" event={"ID":"3856f23c-8dc3-4b36-b3b7-955dff315250","Type":"ContainerDied","Data":"6fc1c0bb3d0c80288d19a789669a571d8829461ce364a38889c09a2c46f5f070"} Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602364 4661 scope.go:117] "RemoveContainer" containerID="229a253605fb06114bb299f6125c0ea1a738620cfb8a51ac9b53d4eb809f736d" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602336 4661 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-node-log\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602417 4661 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602432 4661 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602442 4661 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602453 4661 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602463 4661 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602473 4661 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3856f23c-8dc3-4b36-b3b7-955dff315250-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602483 4661 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602494 4661 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602837 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66kpm\" (UniqueName: \"kubernetes.io/projected/3856f23c-8dc3-4b36-b3b7-955dff315250-kube-api-access-66kpm\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602903 4661 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3856f23c-8dc3-4b36-b3b7-955dff315250-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602913 4661 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-host-slash\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.602461 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fxb9d" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.603181 4661 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3856f23c-8dc3-4b36-b3b7-955dff315250-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.604229 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/2.log" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.635270 4661 scope.go:117] "RemoveContainer" containerID="dfbc19df20b659446872267891c3a922b6a01e39d8f0557505f25cdc5ba1a648" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.637530 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fxb9d"] Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.648116 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fxb9d"] Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.652740 4661 scope.go:117] "RemoveContainer" containerID="6bac19d8c5ba66dc20e5e4b90b2ba10efe69f218908b04abb221416f47e47f5b" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.667811 4661 scope.go:117] "RemoveContainer" containerID="1f5f5d96326cd37c1101488fff8b4ce215ce84766faf13112bed7df0a767de0c" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.689773 4661 scope.go:117] "RemoveContainer" containerID="37fb98a4cea5fe59a694ef52ebebfd3366649970415c8bd3b1307e6d150ffe66" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704395 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-run-netns\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704444 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-cni-netd\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704476 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-env-overrides\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704498 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-cni-bin\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704503 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-run-netns\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: 
I0120 18:17:02.704532 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-log-socket\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704557 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-run-ovn-kubernetes\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704585 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-systemd-units\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704588 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-cni-bin\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704606 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704635 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-etc-openvswitch\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704639 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-log-socket\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704681 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-kubelet\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704705 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-ovnkube-script-lib\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704715 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704732 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-run-ovn\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704722 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-run-ovn-kubernetes\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704754 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-node-log\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704769 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-systemd-units\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704853 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-ovnkube-config\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704918 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-run-openvswitch\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.704946 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-ovn-node-metrics-cert\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705011 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-slash\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705071 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-var-lib-openvswitch\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705096 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d47ll\" (UniqueName: \"kubernetes.io/projected/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-kube-api-access-d47ll\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705117 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-run-systemd\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705193 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-run-systemd\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705202 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-env-overrides\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705220 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-kubelet\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705255 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-run-openvswitch\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705282 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-etc-openvswitch\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705311 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-run-ovn\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705327 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-node-log\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705764 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-slash\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705785 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-ovnkube-script-lib\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705830 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-var-lib-openvswitch\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.705923 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-ovnkube-config\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.706240 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-host-cni-netd\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.708931 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-ovn-node-metrics-cert\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.711338 4661 scope.go:117] "RemoveContainer" containerID="54a53d0636da9c6e7974633697967fa21ba02b0357019aca7c83994f57d06d84" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.727890 4661 scope.go:117] "RemoveContainer" containerID="d53da47c39bd1f10fe866890f30f12f27cb0cfce0348c89fc0e89b3e8f563f2f" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.729972 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d47ll\" (UniqueName: \"kubernetes.io/projected/75ea4440-ebbc-4d84-a8e0-cf0836dd585a-kube-api-access-d47ll\") pod \"ovnkube-node-jdwwx\" (UID: \"75ea4440-ebbc-4d84-a8e0-cf0836dd585a\") " pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.745797 4661 scope.go:117] "RemoveContainer" containerID="407e4d66f22050b80251fcb98ac7168d601d70dff1679bdaca0fc82d6068da41" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.762462 4661 scope.go:117] "RemoveContainer" containerID="babd416d0d33b286f533dc5bd8d6904d24fd23632efce36edb6e13183fbd390a" Jan 20 18:17:02 crc kubenswrapper[4661]: I0120 18:17:02.769447 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:03 crc kubenswrapper[4661]: I0120 18:17:03.612381 4661 generic.go:334] "Generic (PLEG): container finished" podID="75ea4440-ebbc-4d84-a8e0-cf0836dd585a" containerID="3c2bc3686f4e06c02d8b3248c55755796aefd9ed50ce6c85798cf82adabf9bf4" exitCode=0 Jan 20 18:17:03 crc kubenswrapper[4661]: I0120 18:17:03.612463 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" event={"ID":"75ea4440-ebbc-4d84-a8e0-cf0836dd585a","Type":"ContainerDied","Data":"3c2bc3686f4e06c02d8b3248c55755796aefd9ed50ce6c85798cf82adabf9bf4"} Jan 20 18:17:03 crc kubenswrapper[4661]: I0120 18:17:03.613410 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" event={"ID":"75ea4440-ebbc-4d84-a8e0-cf0836dd585a","Type":"ContainerStarted","Data":"79ac3fdd0d8969ebb63bdd7576f4baaa3d98a571f3d4087d765623e3fcdbcece"} Jan 20 18:17:04 crc kubenswrapper[4661]: I0120 18:17:04.156503 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3856f23c-8dc3-4b36-b3b7-955dff315250" path="/var/lib/kubelet/pods/3856f23c-8dc3-4b36-b3b7-955dff315250/volumes" Jan 20 18:17:04 crc kubenswrapper[4661]: I0120 18:17:04.622608 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" event={"ID":"75ea4440-ebbc-4d84-a8e0-cf0836dd585a","Type":"ContainerStarted","Data":"107edc31ae85847b28560454d54b302c6d0512582e6b280fdc8d2cd41c7c4b76"} Jan 20 18:17:04 crc kubenswrapper[4661]: I0120 18:17:04.622660 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" event={"ID":"75ea4440-ebbc-4d84-a8e0-cf0836dd585a","Type":"ContainerStarted","Data":"28ce8a79737236762fca1c242f778cb77c9a34cfed37a2dd651a27a1cb897f71"} Jan 20 18:17:04 crc kubenswrapper[4661]: I0120 18:17:04.622722 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" event={"ID":"75ea4440-ebbc-4d84-a8e0-cf0836dd585a","Type":"ContainerStarted","Data":"4d2dc18d4d7f1cdb4f79002dc774938c7c4b8f2ec924139f0ea75a93a372eb9f"} Jan 20 18:17:05 crc kubenswrapper[4661]: I0120 18:17:05.637118 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" event={"ID":"75ea4440-ebbc-4d84-a8e0-cf0836dd585a","Type":"ContainerStarted","Data":"871ea0451a7dc6eaf069c52ce65fff9a770633557b6ff58c61211ba5bc5de26a"} Jan 20 18:17:05 crc kubenswrapper[4661]: I0120 18:17:05.637376 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" event={"ID":"75ea4440-ebbc-4d84-a8e0-cf0836dd585a","Type":"ContainerStarted","Data":"871da3ab59d97de7321264e765058c7f4947591d1316543d6a273d7f91fe7cf7"} Jan 20 18:17:05 crc kubenswrapper[4661]: I0120 18:17:05.637390 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" event={"ID":"75ea4440-ebbc-4d84-a8e0-cf0836dd585a","Type":"ContainerStarted","Data":"0aba60fc22dcdf60ec0d6181ad9d20694b008cc844111ed4d1d4f3c6c8cf4f0f"} Jan 20 18:17:06 crc kubenswrapper[4661]: I0120 18:17:06.646912 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" event={"ID":"75ea4440-ebbc-4d84-a8e0-cf0836dd585a","Type":"ContainerStarted","Data":"e70c25b0931b3ddbf62f0342e5b36aca6db2caa2845db4b35ef6b56351d63bf0"} Jan 20 18:17:08 crc kubenswrapper[4661]: I0120 18:17:08.661893 4661 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" event={"ID":"75ea4440-ebbc-4d84-a8e0-cf0836dd585a","Type":"ContainerStarted","Data":"02933aa473a63ec7a50c032dc2513d122d432f2a9df4d87b2672e44832b83c29"} Jan 20 18:17:08 crc kubenswrapper[4661]: I0120 18:17:08.663093 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:08 crc kubenswrapper[4661]: I0120 18:17:08.663115 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:08 crc kubenswrapper[4661]: I0120 18:17:08.663152 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:08 crc kubenswrapper[4661]: I0120 18:17:08.694566 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:08 crc kubenswrapper[4661]: I0120 18:17:08.709783 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" podStartSLOduration=6.709768968 podStartE2EDuration="6.709768968s" podCreationTimestamp="2026-01-20 18:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:17:08.708258324 +0000 UTC m=+685.039047986" watchObservedRunningTime="2026-01-20 18:17:08.709768968 +0000 UTC m=+685.040558630" Jan 20 18:17:08 crc kubenswrapper[4661]: I0120 18:17:08.714086 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:13 crc kubenswrapper[4661]: I0120 18:17:13.142892 4661 scope.go:117] "RemoveContainer" containerID="3b3a01654e524ee1a13ea5553a8ca6b24eb116690557d8b8604407d8577198dd" Jan 20 18:17:13 crc kubenswrapper[4661]: E0120 18:17:13.143489 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-z97p2_openshift-multus(5b6f2401-3eb9-4ee4-b79c-6faee06bc21c)\"" pod="openshift-multus/multus-z97p2" podUID="5b6f2401-3eb9-4ee4-b79c-6faee06bc21c" Jan 20 18:17:26 crc kubenswrapper[4661]: I0120 18:17:26.142143 4661 scope.go:117] "RemoveContainer" containerID="3b3a01654e524ee1a13ea5553a8ca6b24eb116690557d8b8604407d8577198dd" Jan 20 18:17:28 crc kubenswrapper[4661]: I0120 18:17:28.802748 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/2.log" Jan 20 18:17:28 crc kubenswrapper[4661]: I0120 18:17:28.803090 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-z97p2" event={"ID":"5b6f2401-3eb9-4ee4-b79c-6faee06bc21c","Type":"ContainerStarted","Data":"00f4b5130ca2e8dfa94feefef648d09ba56b099588829acc2839fb73c6161731"} Jan 20 18:17:32 crc kubenswrapper[4661]: I0120 18:17:32.837245 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jdwwx" Jan 20 18:17:43 crc kubenswrapper[4661]: I0120 18:17:43.747177 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn"] Jan 20 18:17:43 crc kubenswrapper[4661]: I0120 18:17:43.749000 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" Jan 20 18:17:43 crc kubenswrapper[4661]: W0120 18:17:43.751047 4661 reflector.go:561] object-"openshift-marketplace"/"default-dockercfg-vmwhc": failed to list *v1.Secret: secrets "default-dockercfg-vmwhc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 20 18:17:43 crc kubenswrapper[4661]: E0120 18:17:43.751156 4661 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"default-dockercfg-vmwhc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-vmwhc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 18:17:43 crc kubenswrapper[4661]: I0120 18:17:43.798463 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn"] Jan 20 18:17:43 crc kubenswrapper[4661]: I0120 18:17:43.909703 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b42326cd-209d-43fd-8195-113ca565dfee-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn\" (UID: \"b42326cd-209d-43fd-8195-113ca565dfee\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" Jan 20 18:17:43 crc kubenswrapper[4661]: I0120 18:17:43.909778 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rczjm\" (UniqueName: \"kubernetes.io/projected/b42326cd-209d-43fd-8195-113ca565dfee-kube-api-access-rczjm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn\" (UID: \"b42326cd-209d-43fd-8195-113ca565dfee\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" Jan 20 18:17:43 crc kubenswrapper[4661]: I0120 18:17:43.909871 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b42326cd-209d-43fd-8195-113ca565dfee-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn\" (UID: \"b42326cd-209d-43fd-8195-113ca565dfee\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" Jan 20 18:17:44 crc kubenswrapper[4661]: I0120 18:17:44.011151 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczjm\" (UniqueName: \"kubernetes.io/projected/b42326cd-209d-43fd-8195-113ca565dfee-kube-api-access-rczjm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn\" (UID: \"b42326cd-209d-43fd-8195-113ca565dfee\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" Jan 20 18:17:44 crc kubenswrapper[4661]: I0120 18:17:44.011229 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b42326cd-209d-43fd-8195-113ca565dfee-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn\" (UID: \"b42326cd-209d-43fd-8195-113ca565dfee\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" Jan 20 18:17:44 crc kubenswrapper[4661]: I0120 18:17:44.011263 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b42326cd-209d-43fd-8195-113ca565dfee-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn\" (UID: \"b42326cd-209d-43fd-8195-113ca565dfee\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" Jan 20 18:17:44 crc kubenswrapper[4661]: I0120 18:17:44.011892 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b42326cd-209d-43fd-8195-113ca565dfee-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn\" (UID: \"b42326cd-209d-43fd-8195-113ca565dfee\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" Jan 20 18:17:44 crc kubenswrapper[4661]: I0120 18:17:44.012034 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b42326cd-209d-43fd-8195-113ca565dfee-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn\" (UID: \"b42326cd-209d-43fd-8195-113ca565dfee\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" Jan 20 18:17:44 crc kubenswrapper[4661]: I0120 18:17:44.034026 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczjm\" (UniqueName: \"kubernetes.io/projected/b42326cd-209d-43fd-8195-113ca565dfee-kube-api-access-rczjm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn\" (UID: \"b42326cd-209d-43fd-8195-113ca565dfee\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" Jan 20 18:17:44 crc kubenswrapper[4661]: I0120 18:17:44.645531 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 20 18:17:44 crc kubenswrapper[4661]: I0120 18:17:44.654110 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" Jan 20 18:17:44 crc kubenswrapper[4661]: I0120 18:17:44.892303 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn"] Jan 20 18:17:44 crc kubenswrapper[4661]: W0120 18:17:44.901080 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb42326cd_209d_43fd_8195_113ca565dfee.slice/crio-a80a5a231d4f24c75a554039ee895390009a9d7b1ac0ebff557a91d027645002 WatchSource:0}: Error finding container a80a5a231d4f24c75a554039ee895390009a9d7b1ac0ebff557a91d027645002: Status 404 returned error can't find the container with id a80a5a231d4f24c75a554039ee895390009a9d7b1ac0ebff557a91d027645002 Jan 20 18:17:44 crc kubenswrapper[4661]: I0120 18:17:44.915462 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" event={"ID":"b42326cd-209d-43fd-8195-113ca565dfee","Type":"ContainerStarted","Data":"a80a5a231d4f24c75a554039ee895390009a9d7b1ac0ebff557a91d027645002"} Jan 20 18:17:45 crc kubenswrapper[4661]: I0120 18:17:45.926923 4661 generic.go:334] "Generic (PLEG): container finished" podID="b42326cd-209d-43fd-8195-113ca565dfee" containerID="8b570e369b7755872e4bfc3efde0eec38629be223d075dc2c80869828bc55248" exitCode=0 Jan 20 18:17:45 crc kubenswrapper[4661]: I0120 18:17:45.927001 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" event={"ID":"b42326cd-209d-43fd-8195-113ca565dfee","Type":"ContainerDied","Data":"8b570e369b7755872e4bfc3efde0eec38629be223d075dc2c80869828bc55248"} Jan 20 18:17:48 crc kubenswrapper[4661]: I0120 18:17:48.951223 4661 generic.go:334] "Generic (PLEG): container finished" podID="b42326cd-209d-43fd-8195-113ca565dfee" containerID="18d0c95f0af41b3e59732271e1fccaf5cc27ed566ab34824419c7e7801a9c028" exitCode=0 Jan 20 18:17:48 crc kubenswrapper[4661]: I0120 18:17:48.951483 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" event={"ID":"b42326cd-209d-43fd-8195-113ca565dfee","Type":"ContainerDied","Data":"18d0c95f0af41b3e59732271e1fccaf5cc27ed566ab34824419c7e7801a9c028"} Jan 20 18:17:49 crc kubenswrapper[4661]: I0120 18:17:49.960107 4661 generic.go:334] "Generic (PLEG): container finished" podID="b42326cd-209d-43fd-8195-113ca565dfee" containerID="6bcb5681507f3ff7bf177f1ac447cc3689a14cdd265262c80e891afb330d8d00" exitCode=0 Jan 20 18:17:49 crc kubenswrapper[4661]: I0120 18:17:49.960196 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" event={"ID":"b42326cd-209d-43fd-8195-113ca565dfee","Type":"ContainerDied","Data":"6bcb5681507f3ff7bf177f1ac447cc3689a14cdd265262c80e891afb330d8d00"} Jan 20 18:17:51 crc kubenswrapper[4661]: I0120 18:17:51.179125 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" Jan 20 18:17:51 crc kubenswrapper[4661]: I0120 18:17:51.225448 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b42326cd-209d-43fd-8195-113ca565dfee-bundle\") pod \"b42326cd-209d-43fd-8195-113ca565dfee\" (UID: \"b42326cd-209d-43fd-8195-113ca565dfee\") " Jan 20 18:17:51 crc kubenswrapper[4661]: I0120 18:17:51.225538 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b42326cd-209d-43fd-8195-113ca565dfee-util\") pod \"b42326cd-209d-43fd-8195-113ca565dfee\" (UID: \"b42326cd-209d-43fd-8195-113ca565dfee\") " Jan 20 18:17:51 crc kubenswrapper[4661]: I0120 18:17:51.225607 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rczjm\" (UniqueName: \"kubernetes.io/projected/b42326cd-209d-43fd-8195-113ca565dfee-kube-api-access-rczjm\") pod \"b42326cd-209d-43fd-8195-113ca565dfee\" (UID: \"b42326cd-209d-43fd-8195-113ca565dfee\") " Jan 20 18:17:51 crc kubenswrapper[4661]: I0120 18:17:51.226262 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b42326cd-209d-43fd-8195-113ca565dfee-bundle" (OuterVolumeSpecName: "bundle") pod "b42326cd-209d-43fd-8195-113ca565dfee" (UID: "b42326cd-209d-43fd-8195-113ca565dfee"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:17:51 crc kubenswrapper[4661]: I0120 18:17:51.233950 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b42326cd-209d-43fd-8195-113ca565dfee-kube-api-access-rczjm" (OuterVolumeSpecName: "kube-api-access-rczjm") pod "b42326cd-209d-43fd-8195-113ca565dfee" (UID: "b42326cd-209d-43fd-8195-113ca565dfee"). InnerVolumeSpecName "kube-api-access-rczjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:17:51 crc kubenswrapper[4661]: I0120 18:17:51.238556 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b42326cd-209d-43fd-8195-113ca565dfee-util" (OuterVolumeSpecName: "util") pod "b42326cd-209d-43fd-8195-113ca565dfee" (UID: "b42326cd-209d-43fd-8195-113ca565dfee"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:17:51 crc kubenswrapper[4661]: I0120 18:17:51.327286 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rczjm\" (UniqueName: \"kubernetes.io/projected/b42326cd-209d-43fd-8195-113ca565dfee-kube-api-access-rczjm\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:51 crc kubenswrapper[4661]: I0120 18:17:51.327351 4661 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b42326cd-209d-43fd-8195-113ca565dfee-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:51 crc kubenswrapper[4661]: I0120 18:17:51.327363 4661 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b42326cd-209d-43fd-8195-113ca565dfee-util\") on node \"crc\" DevicePath \"\"" Jan 20 18:17:51 crc kubenswrapper[4661]: I0120 18:17:51.978832 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" event={"ID":"b42326cd-209d-43fd-8195-113ca565dfee","Type":"ContainerDied","Data":"a80a5a231d4f24c75a554039ee895390009a9d7b1ac0ebff557a91d027645002"} Jan 20 18:17:51 crc kubenswrapper[4661]: I0120 18:17:51.978932 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a80a5a231d4f24c75a554039ee895390009a9d7b1ac0ebff557a91d027645002" Jan 20 18:17:51 crc kubenswrapper[4661]: I0120 18:17:51.979052 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.330191 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-mkng6"] Jan 20 18:17:55 crc kubenswrapper[4661]: E0120 18:17:55.330753 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b42326cd-209d-43fd-8195-113ca565dfee" containerName="pull" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.330770 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b42326cd-209d-43fd-8195-113ca565dfee" containerName="pull" Jan 20 18:17:55 crc kubenswrapper[4661]: E0120 18:17:55.330792 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b42326cd-209d-43fd-8195-113ca565dfee" containerName="util" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.330819 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b42326cd-209d-43fd-8195-113ca565dfee" containerName="util" Jan 20 18:17:55 crc kubenswrapper[4661]: E0120 18:17:55.330839 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b42326cd-209d-43fd-8195-113ca565dfee" containerName="extract" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.330849 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b42326cd-209d-43fd-8195-113ca565dfee" containerName="extract" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.330965 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b42326cd-209d-43fd-8195-113ca565dfee" containerName="extract" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.331407 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-mkng6" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.336647 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.337916 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.337924 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ggtzs" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.373854 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxxjn\" (UniqueName: \"kubernetes.io/projected/91e3ce75-26ba-42cb-b4dd-322bc9188bab-kube-api-access-bxxjn\") pod \"nmstate-operator-646758c888-mkng6\" (UID: \"91e3ce75-26ba-42cb-b4dd-322bc9188bab\") " pod="openshift-nmstate/nmstate-operator-646758c888-mkng6" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.381435 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-mkng6"] Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.474768 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxxjn\" (UniqueName: \"kubernetes.io/projected/91e3ce75-26ba-42cb-b4dd-322bc9188bab-kube-api-access-bxxjn\") pod \"nmstate-operator-646758c888-mkng6\" (UID: \"91e3ce75-26ba-42cb-b4dd-322bc9188bab\") " pod="openshift-nmstate/nmstate-operator-646758c888-mkng6" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.492356 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxxjn\" (UniqueName: \"kubernetes.io/projected/91e3ce75-26ba-42cb-b4dd-322bc9188bab-kube-api-access-bxxjn\") pod \"nmstate-operator-646758c888-mkng6\" (UID: \"91e3ce75-26ba-42cb-b4dd-322bc9188bab\") " pod="openshift-nmstate/nmstate-operator-646758c888-mkng6" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.649419 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-mkng6" Jan 20 18:17:55 crc kubenswrapper[4661]: I0120 18:17:55.893628 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-mkng6"] Jan 20 18:17:56 crc kubenswrapper[4661]: I0120 18:17:56.000716 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-mkng6" event={"ID":"91e3ce75-26ba-42cb-b4dd-322bc9188bab","Type":"ContainerStarted","Data":"9722a60ce5337e7ed9efe4596de048e2aaf9750119f8fe53b3d10b42789873f7"} Jan 20 18:17:59 crc kubenswrapper[4661]: I0120 18:17:59.324628 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:17:59 crc kubenswrapper[4661]: I0120 18:17:59.324703 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:18:01 crc kubenswrapper[4661]: I0120 18:18:01.032600 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-mkng6" event={"ID":"91e3ce75-26ba-42cb-b4dd-322bc9188bab","Type":"ContainerStarted","Data":"e3c5d1bede09d32508341658e6e73b16b635e1b63f52210d8d040b560fcdbe76"} Jan 20 18:18:01 crc kubenswrapper[4661]: I0120 18:18:01.064429 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-mkng6" podStartSLOduration=1.5986185640000001 podStartE2EDuration="6.064407578s" podCreationTimestamp="2026-01-20 18:17:55 +0000 UTC" firstStartedPulling="2026-01-20 18:17:55.903611797 +0000 UTC m=+732.234401499" lastFinishedPulling="2026-01-20 18:18:00.369400811 +0000 UTC m=+736.700190513" observedRunningTime="2026-01-20 18:18:01.062278 +0000 UTC m=+737.393067672" watchObservedRunningTime="2026-01-20 18:18:01.064407578 +0000 UTC m=+737.395197250" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.180145 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-7pxb7"] Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.181382 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-7pxb7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.188838 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkmm5\" (UniqueName: \"kubernetes.io/projected/7f2c01ac-294a-42b8-9988-22419d94a0ec-kube-api-access-jkmm5\") pod \"nmstate-metrics-54757c584b-7pxb7\" (UID: \"7f2c01ac-294a-42b8-9988-22419d94a0ec\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-7pxb7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.197264 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-7mxh8" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.206832 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82"] Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.207473 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.212975 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.235688 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-q9p2x"] Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.236312 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.285198 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-7pxb7"] Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.292035 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fa442cfc-fd6e-4b5d-882d-aaa8de83f99a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6rl82\" (UID: \"fa442cfc-fd6e-4b5d-882d-aaa8de83f99a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.292091 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkmm5\" (UniqueName: \"kubernetes.io/projected/7f2c01ac-294a-42b8-9988-22419d94a0ec-kube-api-access-jkmm5\") pod \"nmstate-metrics-54757c584b-7pxb7\" (UID: \"7f2c01ac-294a-42b8-9988-22419d94a0ec\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-7pxb7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.292128 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m67bs\" (UniqueName: \"kubernetes.io/projected/fa442cfc-fd6e-4b5d-882d-aaa8de83f99a-kube-api-access-m67bs\") pod \"nmstate-webhook-8474b5b9d8-6rl82\" (UID: \"fa442cfc-fd6e-4b5d-882d-aaa8de83f99a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.292158 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0b121ec2-f30a-46c4-a556-dd00cca2a1e3-nmstate-lock\") pod \"nmstate-handler-q9p2x\" (UID: \"0b121ec2-f30a-46c4-a556-dd00cca2a1e3\") " pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.292181 4661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx4dm\" (UniqueName: \"kubernetes.io/projected/0b121ec2-f30a-46c4-a556-dd00cca2a1e3-kube-api-access-jx4dm\") pod \"nmstate-handler-q9p2x\" (UID: \"0b121ec2-f30a-46c4-a556-dd00cca2a1e3\") " pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.292205 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0b121ec2-f30a-46c4-a556-dd00cca2a1e3-ovs-socket\") pod \"nmstate-handler-q9p2x\" (UID: \"0b121ec2-f30a-46c4-a556-dd00cca2a1e3\") " pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.292229 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0b121ec2-f30a-46c4-a556-dd00cca2a1e3-dbus-socket\") pod \"nmstate-handler-q9p2x\" (UID: \"0b121ec2-f30a-46c4-a556-dd00cca2a1e3\") " pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.306587 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82"] Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.315127 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkmm5\" (UniqueName: \"kubernetes.io/projected/7f2c01ac-294a-42b8-9988-22419d94a0ec-kube-api-access-jkmm5\") pod \"nmstate-metrics-54757c584b-7pxb7\" (UID: \"7f2c01ac-294a-42b8-9988-22419d94a0ec\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-7pxb7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.388564 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz"] Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.389288 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.391240 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.392492 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.392972 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fa442cfc-fd6e-4b5d-882d-aaa8de83f99a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6rl82\" (UID: \"fa442cfc-fd6e-4b5d-882d-aaa8de83f99a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.393041 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m67bs\" (UniqueName: \"kubernetes.io/projected/fa442cfc-fd6e-4b5d-882d-aaa8de83f99a-kube-api-access-m67bs\") pod \"nmstate-webhook-8474b5b9d8-6rl82\" (UID: \"fa442cfc-fd6e-4b5d-882d-aaa8de83f99a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.393075 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx4dm\" (UniqueName: \"kubernetes.io/projected/0b121ec2-f30a-46c4-a556-dd00cca2a1e3-kube-api-access-jx4dm\") pod \"nmstate-handler-q9p2x\" (UID: \"0b121ec2-f30a-46c4-a556-dd00cca2a1e3\") " pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: E0120 18:18:04.393095 4661 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.393097 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0b121ec2-f30a-46c4-a556-dd00cca2a1e3-nmstate-lock\") pod \"nmstate-handler-q9p2x\" (UID: \"0b121ec2-f30a-46c4-a556-dd00cca2a1e3\") " pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.393159 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0b121ec2-f30a-46c4-a556-dd00cca2a1e3-ovs-socket\") pod \"nmstate-handler-q9p2x\" (UID: \"0b121ec2-f30a-46c4-a556-dd00cca2a1e3\") " pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.393186 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/68fe7ab0-cff5-474c-aa0d-7c579ddc51bb-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-frgmz\" (UID: \"68fe7ab0-cff5-474c-aa0d-7c579ddc51bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.393207 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px29n\" (UniqueName: \"kubernetes.io/projected/68fe7ab0-cff5-474c-aa0d-7c579ddc51bb-kube-api-access-px29n\") pod \"nmstate-console-plugin-7754f76f8b-frgmz\" (UID: \"68fe7ab0-cff5-474c-aa0d-7c579ddc51bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.393222 4661 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0b121ec2-f30a-46c4-a556-dd00cca2a1e3-dbus-socket\") pod \"nmstate-handler-q9p2x\" (UID: \"0b121ec2-f30a-46c4-a556-dd00cca2a1e3\") " pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.393288 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/68fe7ab0-cff5-474c-aa0d-7c579ddc51bb-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-frgmz\" (UID: \"68fe7ab0-cff5-474c-aa0d-7c579ddc51bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.393289 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0b121ec2-f30a-46c4-a556-dd00cca2a1e3-ovs-socket\") pod \"nmstate-handler-q9p2x\" (UID: \"0b121ec2-f30a-46c4-a556-dd00cca2a1e3\") " pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: E0120 18:18:04.393370 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa442cfc-fd6e-4b5d-882d-aaa8de83f99a-tls-key-pair podName:fa442cfc-fd6e-4b5d-882d-aaa8de83f99a nodeName:}" failed. No retries permitted until 2026-01-20 18:18:04.893355916 +0000 UTC m=+741.224145568 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/fa442cfc-fd6e-4b5d-882d-aaa8de83f99a-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-6rl82" (UID: "fa442cfc-fd6e-4b5d-882d-aaa8de83f99a") : secret "openshift-nmstate-webhook" not found Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.393421 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-sjcdf" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.393460 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0b121ec2-f30a-46c4-a556-dd00cca2a1e3-dbus-socket\") pod \"nmstate-handler-q9p2x\" (UID: \"0b121ec2-f30a-46c4-a556-dd00cca2a1e3\") " pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.393523 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0b121ec2-f30a-46c4-a556-dd00cca2a1e3-nmstate-lock\") pod \"nmstate-handler-q9p2x\" (UID: \"0b121ec2-f30a-46c4-a556-dd00cca2a1e3\") " pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.399713 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz"] Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.432312 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx4dm\" (UniqueName: \"kubernetes.io/projected/0b121ec2-f30a-46c4-a556-dd00cca2a1e3-kube-api-access-jx4dm\") pod \"nmstate-handler-q9p2x\" (UID: \"0b121ec2-f30a-46c4-a556-dd00cca2a1e3\") " pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.444897 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m67bs\" (UniqueName: \"kubernetes.io/projected/fa442cfc-fd6e-4b5d-882d-aaa8de83f99a-kube-api-access-m67bs\") pod 
\"nmstate-webhook-8474b5b9d8-6rl82\" (UID: \"fa442cfc-fd6e-4b5d-882d-aaa8de83f99a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.494583 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/68fe7ab0-cff5-474c-aa0d-7c579ddc51bb-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-frgmz\" (UID: \"68fe7ab0-cff5-474c-aa0d-7c579ddc51bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.494738 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/68fe7ab0-cff5-474c-aa0d-7c579ddc51bb-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-frgmz\" (UID: \"68fe7ab0-cff5-474c-aa0d-7c579ddc51bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" Jan 20 18:18:04 crc kubenswrapper[4661]: E0120 18:18:04.494758 4661 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.494771 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px29n\" (UniqueName: \"kubernetes.io/projected/68fe7ab0-cff5-474c-aa0d-7c579ddc51bb-kube-api-access-px29n\") pod \"nmstate-console-plugin-7754f76f8b-frgmz\" (UID: \"68fe7ab0-cff5-474c-aa0d-7c579ddc51bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" Jan 20 18:18:04 crc kubenswrapper[4661]: E0120 18:18:04.494825 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68fe7ab0-cff5-474c-aa0d-7c579ddc51bb-plugin-serving-cert podName:68fe7ab0-cff5-474c-aa0d-7c579ddc51bb nodeName:}" failed. No retries permitted until 2026-01-20 18:18:04.994807956 +0000 UTC m=+741.325597618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/68fe7ab0-cff5-474c-aa0d-7c579ddc51bb-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-frgmz" (UID: "68fe7ab0-cff5-474c-aa0d-7c579ddc51bb") : secret "plugin-serving-cert" not found Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.495613 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/68fe7ab0-cff5-474c-aa0d-7c579ddc51bb-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-frgmz\" (UID: \"68fe7ab0-cff5-474c-aa0d-7c579ddc51bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.500733 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-7pxb7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.545633 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px29n\" (UniqueName: \"kubernetes.io/projected/68fe7ab0-cff5-474c-aa0d-7c579ddc51bb-kube-api-access-px29n\") pod \"nmstate-console-plugin-7754f76f8b-frgmz\" (UID: \"68fe7ab0-cff5-474c-aa0d-7c579ddc51bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.553879 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.601850 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5cd94474c7-dd6z7"] Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.602823 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.614190 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cd94474c7-dd6z7"] Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.796816 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2bad6e35-a2c8-488c-bfa7-94d0895ac466-service-ca\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.797083 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bad6e35-a2c8-488c-bfa7-94d0895ac466-trusted-ca-bundle\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.797104 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2bad6e35-a2c8-488c-bfa7-94d0895ac466-console-config\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.797123 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2bad6e35-a2c8-488c-bfa7-94d0895ac466-console-serving-cert\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.797178 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4dnd\" (UniqueName: \"kubernetes.io/projected/2bad6e35-a2c8-488c-bfa7-94d0895ac466-kube-api-access-p4dnd\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.797203 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2bad6e35-a2c8-488c-bfa7-94d0895ac466-oauth-serving-cert\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.797224 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2bad6e35-a2c8-488c-bfa7-94d0895ac466-console-oauth-config\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.808240 4661 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-7pxb7"] Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.898141 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fa442cfc-fd6e-4b5d-882d-aaa8de83f99a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6rl82\" (UID: \"fa442cfc-fd6e-4b5d-882d-aaa8de83f99a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.898189 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2bad6e35-a2c8-488c-bfa7-94d0895ac466-service-ca\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.898224 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bad6e35-a2c8-488c-bfa7-94d0895ac466-trusted-ca-bundle\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.898240 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2bad6e35-a2c8-488c-bfa7-94d0895ac466-console-config\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.898259 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2bad6e35-a2c8-488c-bfa7-94d0895ac466-console-serving-cert\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.898295 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4dnd\" (UniqueName: \"kubernetes.io/projected/2bad6e35-a2c8-488c-bfa7-94d0895ac466-kube-api-access-p4dnd\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.898318 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2bad6e35-a2c8-488c-bfa7-94d0895ac466-oauth-serving-cert\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.898336 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2bad6e35-a2c8-488c-bfa7-94d0895ac466-console-oauth-config\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.899325 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2bad6e35-a2c8-488c-bfa7-94d0895ac466-service-ca\") pod \"console-5cd94474c7-dd6z7\" (UID: 
\"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.900036 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2bad6e35-a2c8-488c-bfa7-94d0895ac466-oauth-serving-cert\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.900934 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bad6e35-a2c8-488c-bfa7-94d0895ac466-trusted-ca-bundle\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.901140 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2bad6e35-a2c8-488c-bfa7-94d0895ac466-console-config\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.902836 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2bad6e35-a2c8-488c-bfa7-94d0895ac466-console-serving-cert\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.903368 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fa442cfc-fd6e-4b5d-882d-aaa8de83f99a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6rl82\" (UID: \"fa442cfc-fd6e-4b5d-882d-aaa8de83f99a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.903772 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2bad6e35-a2c8-488c-bfa7-94d0895ac466-console-oauth-config\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.917131 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4dnd\" (UniqueName: \"kubernetes.io/projected/2bad6e35-a2c8-488c-bfa7-94d0895ac466-kube-api-access-p4dnd\") pod \"console-5cd94474c7-dd6z7\" (UID: \"2bad6e35-a2c8-488c-bfa7-94d0895ac466\") " pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.920676 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:04 crc kubenswrapper[4661]: I0120 18:18:04.998824 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/68fe7ab0-cff5-474c-aa0d-7c579ddc51bb-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-frgmz\" (UID: \"68fe7ab0-cff5-474c-aa0d-7c579ddc51bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" Jan 20 18:18:05 crc kubenswrapper[4661]: I0120 18:18:05.002112 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/68fe7ab0-cff5-474c-aa0d-7c579ddc51bb-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-frgmz\" (UID: \"68fe7ab0-cff5-474c-aa0d-7c579ddc51bb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" Jan 20 18:18:05 crc kubenswrapper[4661]: I0120 18:18:05.007873 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" Jan 20 18:18:05 crc kubenswrapper[4661]: I0120 18:18:05.066546 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q9p2x" event={"ID":"0b121ec2-f30a-46c4-a556-dd00cca2a1e3","Type":"ContainerStarted","Data":"a938a93ab01a16ab1ddcf82400381cecc3a07835f3deefb35152b903dd334e0a"} Jan 20 18:18:05 crc kubenswrapper[4661]: I0120 18:18:05.067246 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-7pxb7" event={"ID":"7f2c01ac-294a-42b8-9988-22419d94a0ec","Type":"ContainerStarted","Data":"e9610f1e4a0492bf5caa35482847a53327936f65266bfa5aad0741a201793cbc"} Jan 20 18:18:05 crc kubenswrapper[4661]: I0120 18:18:05.122185 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" Jan 20 18:18:05 crc kubenswrapper[4661]: I0120 18:18:05.123947 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cd94474c7-dd6z7"] Jan 20 18:18:05 crc kubenswrapper[4661]: W0120 18:18:05.130293 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bad6e35_a2c8_488c_bfa7_94d0895ac466.slice/crio-2b50a89850453027580646b00655356412a42d288c2a59bd2710046f3316ae1f WatchSource:0}: Error finding container 2b50a89850453027580646b00655356412a42d288c2a59bd2710046f3316ae1f: Status 404 returned error can't find the container with id 2b50a89850453027580646b00655356412a42d288c2a59bd2710046f3316ae1f Jan 20 18:18:05 crc kubenswrapper[4661]: I0120 18:18:05.199878 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz"] Jan 20 18:18:05 crc kubenswrapper[4661]: I0120 18:18:05.309726 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82"] Jan 20 18:18:05 crc kubenswrapper[4661]: W0120 18:18:05.314571 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa442cfc_fd6e_4b5d_882d_aaa8de83f99a.slice/crio-c71f3b050113b73607eeab9dda39af78c39739eb8b86a3e1d1d6cddb2a3ae411 WatchSource:0}: Error finding container c71f3b050113b73607eeab9dda39af78c39739eb8b86a3e1d1d6cddb2a3ae411: Status 404 returned error can't find the container with id c71f3b050113b73607eeab9dda39af78c39739eb8b86a3e1d1d6cddb2a3ae411 Jan 20 18:18:06 crc kubenswrapper[4661]: I0120 18:18:06.072988 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" event={"ID":"68fe7ab0-cff5-474c-aa0d-7c579ddc51bb","Type":"ContainerStarted","Data":"336004e4b60dc77bc8a37fafedbabd75c37240e24fc30a29d8f1b7f3a209e6f5"} Jan 20 18:18:06 crc kubenswrapper[4661]: I0120 18:18:06.074209 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cd94474c7-dd6z7" event={"ID":"2bad6e35-a2c8-488c-bfa7-94d0895ac466","Type":"ContainerStarted","Data":"b839909fd68bdacbb744990ff8326e37818433d08d5ca224b3d4c4378c8ba2a5"} Jan 20 18:18:06 crc kubenswrapper[4661]: I0120 18:18:06.074241 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cd94474c7-dd6z7" event={"ID":"2bad6e35-a2c8-488c-bfa7-94d0895ac466","Type":"ContainerStarted","Data":"2b50a89850453027580646b00655356412a42d288c2a59bd2710046f3316ae1f"} Jan 20 18:18:06 crc kubenswrapper[4661]: I0120 18:18:06.075744 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" event={"ID":"fa442cfc-fd6e-4b5d-882d-aaa8de83f99a","Type":"ContainerStarted","Data":"c71f3b050113b73607eeab9dda39af78c39739eb8b86a3e1d1d6cddb2a3ae411"} Jan 20 18:18:06 crc kubenswrapper[4661]: I0120 18:18:06.094397 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5cd94474c7-dd6z7" podStartSLOduration=2.094381509 podStartE2EDuration="2.094381509s" podCreationTimestamp="2026-01-20 18:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:18:06.091280564 +0000 UTC m=+742.422070246" watchObservedRunningTime="2026-01-20 18:18:06.094381509 +0000 UTC 
m=+742.425171171" Jan 20 18:18:09 crc kubenswrapper[4661]: I0120 18:18:09.106467 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-7pxb7" event={"ID":"7f2c01ac-294a-42b8-9988-22419d94a0ec","Type":"ContainerStarted","Data":"24a43f19381776d25612062e301c85d901d882618732f3705fcca3906063a87b"} Jan 20 18:18:09 crc kubenswrapper[4661]: I0120 18:18:09.112317 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" event={"ID":"fa442cfc-fd6e-4b5d-882d-aaa8de83f99a","Type":"ContainerStarted","Data":"3e3b99e68fd96ed79dc6bd62fe1985104f14adc4d667aa3b602382b647dd3047"} Jan 20 18:18:09 crc kubenswrapper[4661]: I0120 18:18:09.112509 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" Jan 20 18:18:09 crc kubenswrapper[4661]: I0120 18:18:09.116900 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q9p2x" event={"ID":"0b121ec2-f30a-46c4-a556-dd00cca2a1e3","Type":"ContainerStarted","Data":"59f9ef414ecc956ac9e38a4078c95fa6a605a45b40cc2e830f35710ef5114007"} Jan 20 18:18:09 crc kubenswrapper[4661]: I0120 18:18:09.117556 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:09 crc kubenswrapper[4661]: I0120 18:18:09.136907 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" podStartSLOduration=2.536295421 podStartE2EDuration="5.136893375s" podCreationTimestamp="2026-01-20 18:18:04 +0000 UTC" firstStartedPulling="2026-01-20 18:18:05.316096938 +0000 UTC m=+741.646886590" lastFinishedPulling="2026-01-20 18:18:07.916694882 +0000 UTC m=+744.247484544" observedRunningTime="2026-01-20 18:18:09.134260463 +0000 UTC m=+745.465050125" watchObservedRunningTime="2026-01-20 18:18:09.136893375 +0000 UTC m=+745.467683037" Jan 20 18:18:09 crc kubenswrapper[4661]: I0120 18:18:09.159191 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-q9p2x" podStartSLOduration=1.19013011 podStartE2EDuration="5.159176321s" podCreationTimestamp="2026-01-20 18:18:04 +0000 UTC" firstStartedPulling="2026-01-20 18:18:04.593162061 +0000 UTC m=+740.923951723" lastFinishedPulling="2026-01-20 18:18:08.562208272 +0000 UTC m=+744.892997934" observedRunningTime="2026-01-20 18:18:09.155123461 +0000 UTC m=+745.485913143" watchObservedRunningTime="2026-01-20 18:18:09.159176321 +0000 UTC m=+745.489965983" Jan 20 18:18:12 crc kubenswrapper[4661]: I0120 18:18:12.140003 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-7pxb7" event={"ID":"7f2c01ac-294a-42b8-9988-22419d94a0ec","Type":"ContainerStarted","Data":"857ab5d96892fdaa5e3bf8c19c2916de93932b4f5d2667f5d04df6012b5180ef"} Jan 20 18:18:12 crc kubenswrapper[4661]: I0120 18:18:12.179971 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-7pxb7" podStartSLOduration=1.641088199 podStartE2EDuration="8.179954157s" podCreationTimestamp="2026-01-20 18:18:04 +0000 UTC" firstStartedPulling="2026-01-20 18:18:04.814293407 +0000 UTC m=+741.145083069" lastFinishedPulling="2026-01-20 18:18:11.353159345 +0000 UTC m=+747.683949027" observedRunningTime="2026-01-20 18:18:12.17310159 +0000 UTC m=+748.503891282" watchObservedRunningTime="2026-01-20 18:18:12.179954157 +0000 UTC 
m=+748.510743829" Jan 20 18:18:14 crc kubenswrapper[4661]: I0120 18:18:14.576253 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-q9p2x" Jan 20 18:18:14 crc kubenswrapper[4661]: I0120 18:18:14.921319 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:14 crc kubenswrapper[4661]: I0120 18:18:14.921602 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:14 crc kubenswrapper[4661]: I0120 18:18:14.927709 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:15 crc kubenswrapper[4661]: I0120 18:18:15.160258 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5cd94474c7-dd6z7" Jan 20 18:18:15 crc kubenswrapper[4661]: I0120 18:18:15.243744 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-phg9x"] Jan 20 18:18:17 crc kubenswrapper[4661]: I0120 18:18:17.171176 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" event={"ID":"68fe7ab0-cff5-474c-aa0d-7c579ddc51bb","Type":"ContainerStarted","Data":"ed6694ff8958eecca6dbec663785b1767cf915932669f6d6d8aef595fbaaaf03"} Jan 20 18:18:17 crc kubenswrapper[4661]: I0120 18:18:17.188808 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-frgmz" podStartSLOduration=1.428568248 podStartE2EDuration="13.188784223s" podCreationTimestamp="2026-01-20 18:18:04 +0000 UTC" firstStartedPulling="2026-01-20 18:18:05.220806546 +0000 UTC m=+741.551596208" lastFinishedPulling="2026-01-20 18:18:16.981022521 +0000 UTC m=+753.311812183" observedRunningTime="2026-01-20 18:18:17.183683044 +0000 UTC m=+753.514472716" watchObservedRunningTime="2026-01-20 18:18:17.188784223 +0000 UTC m=+753.519573885" Jan 20 18:18:25 crc kubenswrapper[4661]: I0120 18:18:25.128065 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6rl82" Jan 20 18:18:28 crc kubenswrapper[4661]: I0120 18:18:28.436379 4661 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 18:18:29 crc kubenswrapper[4661]: I0120 18:18:29.323308 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:18:29 crc kubenswrapper[4661]: I0120 18:18:29.323392 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.296230 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-phg9x" podUID="4c500541-c3f2-4f6d-8bb7-1227aa74989a" containerName="console" 
containerID="cri-o://b226e077e4ab24849277d38c0a08eacaa3c1e7f4fed8a5f921bbc1e1eb4cb8ed" gracePeriod=15 Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.618020 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-phg9x_4c500541-c3f2-4f6d-8bb7-1227aa74989a/console/0.log" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.618345 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.723022 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxvmg\" (UniqueName: \"kubernetes.io/projected/4c500541-c3f2-4f6d-8bb7-1227aa74989a-kube-api-access-hxvmg\") pod \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.723096 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-oauth-config\") pod \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.723125 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-service-ca\") pod \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.723912 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-serving-cert\") pod \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.723962 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-oauth-serving-cert\") pod \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.724001 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-trusted-ca-bundle\") pod \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.724001 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-service-ca" (OuterVolumeSpecName: "service-ca") pod "4c500541-c3f2-4f6d-8bb7-1227aa74989a" (UID: "4c500541-c3f2-4f6d-8bb7-1227aa74989a"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.724038 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-config\") pod \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\" (UID: \"4c500541-c3f2-4f6d-8bb7-1227aa74989a\") " Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.724272 4661 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.724561 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4c500541-c3f2-4f6d-8bb7-1227aa74989a" (UID: "4c500541-c3f2-4f6d-8bb7-1227aa74989a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.724609 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4c500541-c3f2-4f6d-8bb7-1227aa74989a" (UID: "4c500541-c3f2-4f6d-8bb7-1227aa74989a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.724709 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-config" (OuterVolumeSpecName: "console-config") pod "4c500541-c3f2-4f6d-8bb7-1227aa74989a" (UID: "4c500541-c3f2-4f6d-8bb7-1227aa74989a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.733707 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4c500541-c3f2-4f6d-8bb7-1227aa74989a" (UID: "4c500541-c3f2-4f6d-8bb7-1227aa74989a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.733885 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4c500541-c3f2-4f6d-8bb7-1227aa74989a" (UID: "4c500541-c3f2-4f6d-8bb7-1227aa74989a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.734263 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c500541-c3f2-4f6d-8bb7-1227aa74989a-kube-api-access-hxvmg" (OuterVolumeSpecName: "kube-api-access-hxvmg") pod "4c500541-c3f2-4f6d-8bb7-1227aa74989a" (UID: "4c500541-c3f2-4f6d-8bb7-1227aa74989a"). InnerVolumeSpecName "kube-api-access-hxvmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.825190 4661 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.825238 4661 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.825248 4661 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.825256 4661 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.825265 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxvmg\" (UniqueName: \"kubernetes.io/projected/4c500541-c3f2-4f6d-8bb7-1227aa74989a-kube-api-access-hxvmg\") on node \"crc\" DevicePath \"\"" Jan 20 18:18:40 crc kubenswrapper[4661]: I0120 18:18:40.825277 4661 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4c500541-c3f2-4f6d-8bb7-1227aa74989a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.311913 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-phg9x_4c500541-c3f2-4f6d-8bb7-1227aa74989a/console/0.log" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.311971 4661 generic.go:334] "Generic (PLEG): container finished" podID="4c500541-c3f2-4f6d-8bb7-1227aa74989a" containerID="b226e077e4ab24849277d38c0a08eacaa3c1e7f4fed8a5f921bbc1e1eb4cb8ed" exitCode=2 Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.312006 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-phg9x" event={"ID":"4c500541-c3f2-4f6d-8bb7-1227aa74989a","Type":"ContainerDied","Data":"b226e077e4ab24849277d38c0a08eacaa3c1e7f4fed8a5f921bbc1e1eb4cb8ed"} Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.312033 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-phg9x" event={"ID":"4c500541-c3f2-4f6d-8bb7-1227aa74989a","Type":"ContainerDied","Data":"78cd7411d3700beb90063b3132373ffd49296a73375a75e8b159868fbf0d47e5"} Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.312052 4661 scope.go:117] "RemoveContainer" containerID="b226e077e4ab24849277d38c0a08eacaa3c1e7f4fed8a5f921bbc1e1eb4cb8ed" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.312208 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-phg9x" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.330403 4661 scope.go:117] "RemoveContainer" containerID="b226e077e4ab24849277d38c0a08eacaa3c1e7f4fed8a5f921bbc1e1eb4cb8ed" Jan 20 18:18:41 crc kubenswrapper[4661]: E0120 18:18:41.330829 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b226e077e4ab24849277d38c0a08eacaa3c1e7f4fed8a5f921bbc1e1eb4cb8ed\": container with ID starting with b226e077e4ab24849277d38c0a08eacaa3c1e7f4fed8a5f921bbc1e1eb4cb8ed not found: ID does not exist" containerID="b226e077e4ab24849277d38c0a08eacaa3c1e7f4fed8a5f921bbc1e1eb4cb8ed" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.330857 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b226e077e4ab24849277d38c0a08eacaa3c1e7f4fed8a5f921bbc1e1eb4cb8ed"} err="failed to get container status \"b226e077e4ab24849277d38c0a08eacaa3c1e7f4fed8a5f921bbc1e1eb4cb8ed\": rpc error: code = NotFound desc = could not find container \"b226e077e4ab24849277d38c0a08eacaa3c1e7f4fed8a5f921bbc1e1eb4cb8ed\": container with ID starting with b226e077e4ab24849277d38c0a08eacaa3c1e7f4fed8a5f921bbc1e1eb4cb8ed not found: ID does not exist" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.339654 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-phg9x"] Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.342330 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-phg9x"] Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.712201 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn"] Jan 20 18:18:41 crc kubenswrapper[4661]: E0120 18:18:41.712730 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c500541-c3f2-4f6d-8bb7-1227aa74989a" containerName="console" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.712824 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c500541-c3f2-4f6d-8bb7-1227aa74989a" containerName="console" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.712982 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c500541-c3f2-4f6d-8bb7-1227aa74989a" containerName="console" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.713779 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.725086 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn"] Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.728384 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.835125 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kjdc\" (UniqueName: \"kubernetes.io/projected/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-kube-api-access-7kjdc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn\" (UID: \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.835198 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn\" (UID: \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.835250 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn\" (UID: \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.936591 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn\" (UID: \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.936661 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn\" (UID: \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.936781 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kjdc\" (UniqueName: \"kubernetes.io/projected/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-kube-api-access-7kjdc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn\" (UID: \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.937594 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn\" (UID: \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.937734 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn\" (UID: \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" Jan 20 18:18:41 crc kubenswrapper[4661]: I0120 18:18:41.961777 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kjdc\" (UniqueName: \"kubernetes.io/projected/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-kube-api-access-7kjdc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn\" (UID: \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" Jan 20 18:18:42 crc kubenswrapper[4661]: I0120 18:18:42.032298 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" Jan 20 18:18:42 crc kubenswrapper[4661]: I0120 18:18:42.150656 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c500541-c3f2-4f6d-8bb7-1227aa74989a" path="/var/lib/kubelet/pods/4c500541-c3f2-4f6d-8bb7-1227aa74989a/volumes" Jan 20 18:18:42 crc kubenswrapper[4661]: I0120 18:18:42.235772 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn"] Jan 20 18:18:42 crc kubenswrapper[4661]: I0120 18:18:42.318358 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" event={"ID":"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5","Type":"ContainerStarted","Data":"2c2c6681644af75499ef9d0fa8f6e4366022a0fab93436f604c59bb85eea348b"} Jan 20 18:18:43 crc kubenswrapper[4661]: I0120 18:18:43.327301 4661 generic.go:334] "Generic (PLEG): container finished" podID="32740d6c-8df0-4b8a-8097-5fbecd7ca5e5" containerID="73f848e47a606248d76b593ae70649e33d1488f6787be18da5a4eb8805616fac" exitCode=0 Jan 20 18:18:43 crc kubenswrapper[4661]: I0120 18:18:43.327372 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" event={"ID":"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5","Type":"ContainerDied","Data":"73f848e47a606248d76b593ae70649e33d1488f6787be18da5a4eb8805616fac"} Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.270562 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5v9cs"] Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.273034 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.322031 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5v9cs"] Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.359516 4661 generic.go:334] "Generic (PLEG): container finished" podID="32740d6c-8df0-4b8a-8097-5fbecd7ca5e5" containerID="ae45f9ed1ec47119303ffb0a920c964b950fca1b33dfc19d07832f558942c738" exitCode=0 Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.359578 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" event={"ID":"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5","Type":"ContainerDied","Data":"ae45f9ed1ec47119303ffb0a920c964b950fca1b33dfc19d07832f558942c738"} Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.417099 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-utilities\") pod \"redhat-operators-5v9cs\" (UID: \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\") " pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.417210 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp6zp\" (UniqueName: \"kubernetes.io/projected/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-kube-api-access-zp6zp\") pod \"redhat-operators-5v9cs\" (UID: \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\") " pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.417296 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-catalog-content\") pod \"redhat-operators-5v9cs\" (UID: \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\") " pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.518163 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-utilities\") pod \"redhat-operators-5v9cs\" (UID: \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\") " pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.518209 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp6zp\" (UniqueName: \"kubernetes.io/projected/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-kube-api-access-zp6zp\") pod \"redhat-operators-5v9cs\" (UID: \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\") " pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.518244 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-catalog-content\") pod \"redhat-operators-5v9cs\" (UID: \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\") " pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.518910 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-catalog-content\") pod \"redhat-operators-5v9cs\" (UID: 
\"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\") " pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.519053 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-utilities\") pod \"redhat-operators-5v9cs\" (UID: \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\") " pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.539039 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp6zp\" (UniqueName: \"kubernetes.io/projected/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-kube-api-access-zp6zp\") pod \"redhat-operators-5v9cs\" (UID: \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\") " pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:45 crc kubenswrapper[4661]: I0120 18:18:45.628206 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:46 crc kubenswrapper[4661]: I0120 18:18:46.055505 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5v9cs"] Jan 20 18:18:46 crc kubenswrapper[4661]: W0120 18:18:46.058881 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaa3c44f_16ac_4387_b1ae_25f2fe0b55d8.slice/crio-1f4f1c95505d6009483a29de148b99e2fcbb9f795edae6d96abbdfad025eac2b WatchSource:0}: Error finding container 1f4f1c95505d6009483a29de148b99e2fcbb9f795edae6d96abbdfad025eac2b: Status 404 returned error can't find the container with id 1f4f1c95505d6009483a29de148b99e2fcbb9f795edae6d96abbdfad025eac2b Jan 20 18:18:46 crc kubenswrapper[4661]: I0120 18:18:46.365081 4661 generic.go:334] "Generic (PLEG): container finished" podID="eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" containerID="e89f92e6d91d4d56c26c3929f326e03d5e3eb2fee1f581a91d9760bd55ade7f1" exitCode=0 Jan 20 18:18:46 crc kubenswrapper[4661]: I0120 18:18:46.365217 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v9cs" event={"ID":"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8","Type":"ContainerDied","Data":"e89f92e6d91d4d56c26c3929f326e03d5e3eb2fee1f581a91d9760bd55ade7f1"} Jan 20 18:18:46 crc kubenswrapper[4661]: I0120 18:18:46.365984 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v9cs" event={"ID":"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8","Type":"ContainerStarted","Data":"1f4f1c95505d6009483a29de148b99e2fcbb9f795edae6d96abbdfad025eac2b"} Jan 20 18:18:46 crc kubenswrapper[4661]: I0120 18:18:46.368457 4661 generic.go:334] "Generic (PLEG): container finished" podID="32740d6c-8df0-4b8a-8097-5fbecd7ca5e5" containerID="105d50b77ec0d3deff11f0225c74003996fd2409b770b81a22c5a8a576863801" exitCode=0 Jan 20 18:18:46 crc kubenswrapper[4661]: I0120 18:18:46.368592 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" event={"ID":"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5","Type":"ContainerDied","Data":"105d50b77ec0d3deff11f0225c74003996fd2409b770b81a22c5a8a576863801"} Jan 20 18:18:47 crc kubenswrapper[4661]: I0120 18:18:47.377093 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v9cs" 
event={"ID":"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8","Type":"ContainerStarted","Data":"f1f59b932d1d5cf88b17584e72cd478a89ed2faa135a43d0e563946d13f57c03"} Jan 20 18:18:47 crc kubenswrapper[4661]: I0120 18:18:47.614616 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" Jan 20 18:18:47 crc kubenswrapper[4661]: I0120 18:18:47.745347 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-bundle\") pod \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\" (UID: \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\") " Jan 20 18:18:47 crc kubenswrapper[4661]: I0120 18:18:47.745446 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-util\") pod \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\" (UID: \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\") " Jan 20 18:18:47 crc kubenswrapper[4661]: I0120 18:18:47.745473 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kjdc\" (UniqueName: \"kubernetes.io/projected/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-kube-api-access-7kjdc\") pod \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\" (UID: \"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5\") " Jan 20 18:18:47 crc kubenswrapper[4661]: I0120 18:18:47.746709 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-bundle" (OuterVolumeSpecName: "bundle") pod "32740d6c-8df0-4b8a-8097-5fbecd7ca5e5" (UID: "32740d6c-8df0-4b8a-8097-5fbecd7ca5e5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:18:47 crc kubenswrapper[4661]: I0120 18:18:47.751558 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-kube-api-access-7kjdc" (OuterVolumeSpecName: "kube-api-access-7kjdc") pod "32740d6c-8df0-4b8a-8097-5fbecd7ca5e5" (UID: "32740d6c-8df0-4b8a-8097-5fbecd7ca5e5"). InnerVolumeSpecName "kube-api-access-7kjdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:18:47 crc kubenswrapper[4661]: I0120 18:18:47.819905 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-util" (OuterVolumeSpecName: "util") pod "32740d6c-8df0-4b8a-8097-5fbecd7ca5e5" (UID: "32740d6c-8df0-4b8a-8097-5fbecd7ca5e5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:18:47 crc kubenswrapper[4661]: I0120 18:18:47.846741 4661 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-util\") on node \"crc\" DevicePath \"\"" Jan 20 18:18:47 crc kubenswrapper[4661]: I0120 18:18:47.846782 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kjdc\" (UniqueName: \"kubernetes.io/projected/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-kube-api-access-7kjdc\") on node \"crc\" DevicePath \"\"" Jan 20 18:18:47 crc kubenswrapper[4661]: I0120 18:18:47.846795 4661 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/32740d6c-8df0-4b8a-8097-5fbecd7ca5e5-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:18:48 crc kubenswrapper[4661]: I0120 18:18:48.385897 4661 generic.go:334] "Generic (PLEG): container finished" podID="eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" containerID="f1f59b932d1d5cf88b17584e72cd478a89ed2faa135a43d0e563946d13f57c03" exitCode=0 Jan 20 18:18:48 crc kubenswrapper[4661]: I0120 18:18:48.385975 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v9cs" event={"ID":"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8","Type":"ContainerDied","Data":"f1f59b932d1d5cf88b17584e72cd478a89ed2faa135a43d0e563946d13f57c03"} Jan 20 18:18:48 crc kubenswrapper[4661]: I0120 18:18:48.389404 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" event={"ID":"32740d6c-8df0-4b8a-8097-5fbecd7ca5e5","Type":"ContainerDied","Data":"2c2c6681644af75499ef9d0fa8f6e4366022a0fab93436f604c59bb85eea348b"} Jan 20 18:18:48 crc kubenswrapper[4661]: I0120 18:18:48.389428 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c2c6681644af75499ef9d0fa8f6e4366022a0fab93436f604c59bb85eea348b" Jan 20 18:18:48 crc kubenswrapper[4661]: I0120 18:18:48.389477 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn" Jan 20 18:18:49 crc kubenswrapper[4661]: I0120 18:18:49.397229 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v9cs" event={"ID":"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8","Type":"ContainerStarted","Data":"869f1571b1c783266addd965c9676bbe437e95769b1f5be0a46ada7c9cad6742"} Jan 20 18:18:49 crc kubenswrapper[4661]: I0120 18:18:49.418019 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5v9cs" podStartSLOduration=2.018822034 podStartE2EDuration="4.417996189s" podCreationTimestamp="2026-01-20 18:18:45 +0000 UTC" firstStartedPulling="2026-01-20 18:18:46.366849177 +0000 UTC m=+782.697638839" lastFinishedPulling="2026-01-20 18:18:48.766023332 +0000 UTC m=+785.096812994" observedRunningTime="2026-01-20 18:18:49.41581913 +0000 UTC m=+785.746608812" watchObservedRunningTime="2026-01-20 18:18:49.417996189 +0000 UTC m=+785.748785851" Jan 20 18:18:55 crc kubenswrapper[4661]: I0120 18:18:55.628972 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:55 crc kubenswrapper[4661]: I0120 18:18:55.629371 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:55 crc kubenswrapper[4661]: I0120 18:18:55.669204 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:56 crc kubenswrapper[4661]: I0120 18:18:56.468854 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.063313 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5v9cs"] Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.064198 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5v9cs" podUID="eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" containerName="registry-server" containerID="cri-o://869f1571b1c783266addd965c9676bbe437e95769b1f5be0a46ada7c9cad6742" gracePeriod=2 Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.306467 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv"] Jan 20 18:18:59 crc kubenswrapper[4661]: E0120 18:18:59.306698 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32740d6c-8df0-4b8a-8097-5fbecd7ca5e5" containerName="extract" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.306709 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="32740d6c-8df0-4b8a-8097-5fbecd7ca5e5" containerName="extract" Jan 20 18:18:59 crc kubenswrapper[4661]: E0120 18:18:59.306723 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32740d6c-8df0-4b8a-8097-5fbecd7ca5e5" containerName="util" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.306729 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="32740d6c-8df0-4b8a-8097-5fbecd7ca5e5" containerName="util" Jan 20 18:18:59 crc kubenswrapper[4661]: E0120 18:18:59.306749 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32740d6c-8df0-4b8a-8097-5fbecd7ca5e5" containerName="pull" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.306759 
4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="32740d6c-8df0-4b8a-8097-5fbecd7ca5e5" containerName="pull" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.306850 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="32740d6c-8df0-4b8a-8097-5fbecd7ca5e5" containerName="extract" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.307239 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.309839 4661 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.309904 4661 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.310312 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.310841 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.319216 4661 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fw49k" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.324154 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.324214 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.324259 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.324733 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07ea6c09f7f6b3cd3c82aa283c5480b53e463086680df4020a3d82e4e318e5b2"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.324818 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://07ea6c09f7f6b3cd3c82aa283c5480b53e463086680df4020a3d82e4e318e5b2" gracePeriod=600 Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.337375 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv"] Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.389404 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/715feebe-b380-4ce1-9842-7f9da051a195-apiservice-cert\") pod \"metallb-operator-controller-manager-6f4477bbcd-h46rv\" (UID: \"715feebe-b380-4ce1-9842-7f9da051a195\") " pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.389626 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/715feebe-b380-4ce1-9842-7f9da051a195-webhook-cert\") pod \"metallb-operator-controller-manager-6f4477bbcd-h46rv\" (UID: \"715feebe-b380-4ce1-9842-7f9da051a195\") " pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.389736 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9ljd\" (UniqueName: \"kubernetes.io/projected/715feebe-b380-4ce1-9842-7f9da051a195-kube-api-access-v9ljd\") pod \"metallb-operator-controller-manager-6f4477bbcd-h46rv\" (UID: \"715feebe-b380-4ce1-9842-7f9da051a195\") " pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.490928 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/715feebe-b380-4ce1-9842-7f9da051a195-webhook-cert\") pod \"metallb-operator-controller-manager-6f4477bbcd-h46rv\" (UID: \"715feebe-b380-4ce1-9842-7f9da051a195\") " pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.490972 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9ljd\" (UniqueName: \"kubernetes.io/projected/715feebe-b380-4ce1-9842-7f9da051a195-kube-api-access-v9ljd\") pod \"metallb-operator-controller-manager-6f4477bbcd-h46rv\" (UID: \"715feebe-b380-4ce1-9842-7f9da051a195\") " pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.491014 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/715feebe-b380-4ce1-9842-7f9da051a195-apiservice-cert\") pod \"metallb-operator-controller-manager-6f4477bbcd-h46rv\" (UID: \"715feebe-b380-4ce1-9842-7f9da051a195\") " pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.497325 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/715feebe-b380-4ce1-9842-7f9da051a195-apiservice-cert\") pod \"metallb-operator-controller-manager-6f4477bbcd-h46rv\" (UID: \"715feebe-b380-4ce1-9842-7f9da051a195\") " pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.498415 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/715feebe-b380-4ce1-9842-7f9da051a195-webhook-cert\") pod \"metallb-operator-controller-manager-6f4477bbcd-h46rv\" (UID: \"715feebe-b380-4ce1-9842-7f9da051a195\") " pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.511478 4661 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-v9ljd\" (UniqueName: \"kubernetes.io/projected/715feebe-b380-4ce1-9842-7f9da051a195-kube-api-access-v9ljd\") pod \"metallb-operator-controller-manager-6f4477bbcd-h46rv\" (UID: \"715feebe-b380-4ce1-9842-7f9da051a195\") " pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.622324 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.631348 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8"] Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.632048 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.639022 4661 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.642510 4661 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.642689 4661 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dxfwl" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.655193 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8"] Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.695357 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0bbd467-090b-431e-b89b-8159d61d7dab-apiservice-cert\") pod \"metallb-operator-webhook-server-686c759fbc-hdkt8\" (UID: \"e0bbd467-090b-431e-b89b-8159d61d7dab\") " pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.695407 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0bbd467-090b-431e-b89b-8159d61d7dab-webhook-cert\") pod \"metallb-operator-webhook-server-686c759fbc-hdkt8\" (UID: \"e0bbd467-090b-431e-b89b-8159d61d7dab\") " pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.695453 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmcbh\" (UniqueName: \"kubernetes.io/projected/e0bbd467-090b-431e-b89b-8159d61d7dab-kube-api-access-jmcbh\") pod \"metallb-operator-webhook-server-686c759fbc-hdkt8\" (UID: \"e0bbd467-090b-431e-b89b-8159d61d7dab\") " pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.797199 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0bbd467-090b-431e-b89b-8159d61d7dab-webhook-cert\") pod \"metallb-operator-webhook-server-686c759fbc-hdkt8\" (UID: \"e0bbd467-090b-431e-b89b-8159d61d7dab\") " pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.797491 4661 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmcbh\" (UniqueName: \"kubernetes.io/projected/e0bbd467-090b-431e-b89b-8159d61d7dab-kube-api-access-jmcbh\") pod \"metallb-operator-webhook-server-686c759fbc-hdkt8\" (UID: \"e0bbd467-090b-431e-b89b-8159d61d7dab\") " pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.797537 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0bbd467-090b-431e-b89b-8159d61d7dab-apiservice-cert\") pod \"metallb-operator-webhook-server-686c759fbc-hdkt8\" (UID: \"e0bbd467-090b-431e-b89b-8159d61d7dab\") " pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.803266 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0bbd467-090b-431e-b89b-8159d61d7dab-apiservice-cert\") pod \"metallb-operator-webhook-server-686c759fbc-hdkt8\" (UID: \"e0bbd467-090b-431e-b89b-8159d61d7dab\") " pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.808855 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0bbd467-090b-431e-b89b-8159d61d7dab-webhook-cert\") pod \"metallb-operator-webhook-server-686c759fbc-hdkt8\" (UID: \"e0bbd467-090b-431e-b89b-8159d61d7dab\") " pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.841308 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmcbh\" (UniqueName: \"kubernetes.io/projected/e0bbd467-090b-431e-b89b-8159d61d7dab-kube-api-access-jmcbh\") pod \"metallb-operator-webhook-server-686c759fbc-hdkt8\" (UID: \"e0bbd467-090b-431e-b89b-8159d61d7dab\") " pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.975218 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv"] Jan 20 18:18:59 crc kubenswrapper[4661]: I0120 18:18:59.978418 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" Jan 20 18:19:00 crc kubenswrapper[4661]: I0120 18:19:00.262224 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8"] Jan 20 18:19:00 crc kubenswrapper[4661]: W0120 18:19:00.268709 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0bbd467_090b_431e_b89b_8159d61d7dab.slice/crio-c225aa7cdddf473cc69c3c2a32b77f197f45cf2bc0d357336a0895ebb383a17e WatchSource:0}: Error finding container c225aa7cdddf473cc69c3c2a32b77f197f45cf2bc0d357336a0895ebb383a17e: Status 404 returned error can't find the container with id c225aa7cdddf473cc69c3c2a32b77f197f45cf2bc0d357336a0895ebb383a17e Jan 20 18:19:00 crc kubenswrapper[4661]: I0120 18:19:00.463530 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" event={"ID":"e0bbd467-090b-431e-b89b-8159d61d7dab","Type":"ContainerStarted","Data":"c225aa7cdddf473cc69c3c2a32b77f197f45cf2bc0d357336a0895ebb383a17e"} Jan 20 18:19:00 crc kubenswrapper[4661]: I0120 18:19:00.472605 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" event={"ID":"715feebe-b380-4ce1-9842-7f9da051a195","Type":"ContainerStarted","Data":"cb1c069a1e956bd6020c9e09dcd95680f4894e4591ab47934973926d6d1835ba"} Jan 20 18:19:01 crc kubenswrapper[4661]: I0120 18:19:01.482854 4661 generic.go:334] "Generic (PLEG): container finished" podID="eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" containerID="869f1571b1c783266addd965c9676bbe437e95769b1f5be0a46ada7c9cad6742" exitCode=0 Jan 20 18:19:01 crc kubenswrapper[4661]: I0120 18:19:01.482964 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v9cs" event={"ID":"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8","Type":"ContainerDied","Data":"869f1571b1c783266addd965c9676bbe437e95769b1f5be0a46ada7c9cad6742"} Jan 20 18:19:01 crc kubenswrapper[4661]: I0120 18:19:01.485050 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="07ea6c09f7f6b3cd3c82aa283c5480b53e463086680df4020a3d82e4e318e5b2" exitCode=0 Jan 20 18:19:01 crc kubenswrapper[4661]: I0120 18:19:01.485141 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"07ea6c09f7f6b3cd3c82aa283c5480b53e463086680df4020a3d82e4e318e5b2"} Jan 20 18:19:01 crc kubenswrapper[4661]: I0120 18:19:01.485216 4661 scope.go:117] "RemoveContainer" containerID="c6bcba7fc6b732bb22c0a69a286a16f49bd4540d1c1a29be1bebfef4cffede69" Jan 20 18:19:02 crc kubenswrapper[4661]: I0120 18:19:02.502338 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"728daaf1b473865f17a594f3c69374509eda708725908283281a9d0d4f532f9a"} Jan 20 18:19:02 crc kubenswrapper[4661]: I0120 18:19:02.506090 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v9cs" event={"ID":"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8","Type":"ContainerDied","Data":"1f4f1c95505d6009483a29de148b99e2fcbb9f795edae6d96abbdfad025eac2b"} Jan 20 18:19:02 crc kubenswrapper[4661]: 
I0120 18:19:02.506113 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f4f1c95505d6009483a29de148b99e2fcbb9f795edae6d96abbdfad025eac2b" Jan 20 18:19:02 crc kubenswrapper[4661]: I0120 18:19:02.522071 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:19:02 crc kubenswrapper[4661]: I0120 18:19:02.665166 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-catalog-content\") pod \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\" (UID: \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\") " Jan 20 18:19:02 crc kubenswrapper[4661]: I0120 18:19:02.665247 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-utilities\") pod \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\" (UID: \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\") " Jan 20 18:19:02 crc kubenswrapper[4661]: I0120 18:19:02.665318 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp6zp\" (UniqueName: \"kubernetes.io/projected/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-kube-api-access-zp6zp\") pod \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\" (UID: \"eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8\") " Jan 20 18:19:02 crc kubenswrapper[4661]: I0120 18:19:02.666877 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-utilities" (OuterVolumeSpecName: "utilities") pod "eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" (UID: "eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:19:02 crc kubenswrapper[4661]: I0120 18:19:02.680876 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-kube-api-access-zp6zp" (OuterVolumeSpecName: "kube-api-access-zp6zp") pod "eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" (UID: "eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8"). InnerVolumeSpecName "kube-api-access-zp6zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:19:02 crc kubenswrapper[4661]: I0120 18:19:02.766832 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:19:02 crc kubenswrapper[4661]: I0120 18:19:02.766876 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp6zp\" (UniqueName: \"kubernetes.io/projected/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-kube-api-access-zp6zp\") on node \"crc\" DevicePath \"\"" Jan 20 18:19:02 crc kubenswrapper[4661]: I0120 18:19:02.791139 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" (UID: "eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:19:02 crc kubenswrapper[4661]: I0120 18:19:02.867737 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:19:03 crc kubenswrapper[4661]: I0120 18:19:03.511712 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5v9cs" Jan 20 18:19:03 crc kubenswrapper[4661]: I0120 18:19:03.552926 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5v9cs"] Jan 20 18:19:03 crc kubenswrapper[4661]: I0120 18:19:03.556579 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5v9cs"] Jan 20 18:19:04 crc kubenswrapper[4661]: I0120 18:19:04.150079 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" path="/var/lib/kubelet/pods/eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8/volumes" Jan 20 18:19:07 crc kubenswrapper[4661]: I0120 18:19:07.537754 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" event={"ID":"715feebe-b380-4ce1-9842-7f9da051a195","Type":"ContainerStarted","Data":"36feb280b1c14ee31e00d0be81202b02d698c6e3cd84f16c25500dcaa3163022"} Jan 20 18:19:07 crc kubenswrapper[4661]: I0120 18:19:07.538136 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" Jan 20 18:19:07 crc kubenswrapper[4661]: I0120 18:19:07.539870 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" event={"ID":"e0bbd467-090b-431e-b89b-8159d61d7dab","Type":"ContainerStarted","Data":"14f5a2414babc2d910021827a7832da3b84e526ab5f39f1ff738643744f0c1bd"} Jan 20 18:19:07 crc kubenswrapper[4661]: I0120 18:19:07.540020 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" Jan 20 18:19:07 crc kubenswrapper[4661]: I0120 18:19:07.559271 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" podStartSLOduration=2.070685738 podStartE2EDuration="8.559253398s" podCreationTimestamp="2026-01-20 18:18:59 +0000 UTC" firstStartedPulling="2026-01-20 18:18:59.977721345 +0000 UTC m=+796.308511007" lastFinishedPulling="2026-01-20 18:19:06.466289005 +0000 UTC m=+802.797078667" observedRunningTime="2026-01-20 18:19:07.556409666 +0000 UTC m=+803.887199338" watchObservedRunningTime="2026-01-20 18:19:07.559253398 +0000 UTC m=+803.890043060" Jan 20 18:19:07 crc kubenswrapper[4661]: I0120 18:19:07.584715 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" podStartSLOduration=2.369076548 podStartE2EDuration="8.58469595s" podCreationTimestamp="2026-01-20 18:18:59 +0000 UTC" firstStartedPulling="2026-01-20 18:19:00.271614086 +0000 UTC m=+796.602403748" lastFinishedPulling="2026-01-20 18:19:06.487233478 +0000 UTC m=+802.818023150" observedRunningTime="2026-01-20 18:19:07.579809297 +0000 UTC m=+803.910598979" watchObservedRunningTime="2026-01-20 18:19:07.58469595 +0000 UTC m=+803.915485612" Jan 20 18:19:19 crc kubenswrapper[4661]: I0120 
18:19:19.983742 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-686c759fbc-hdkt8" Jan 20 18:19:39 crc kubenswrapper[4661]: I0120 18:19:39.626034 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6f4477bbcd-h46rv" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.508265 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9s69m"] Jan 20 18:19:40 crc kubenswrapper[4661]: E0120 18:19:40.508749 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" containerName="registry-server" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.508762 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" containerName="registry-server" Jan 20 18:19:40 crc kubenswrapper[4661]: E0120 18:19:40.508774 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" containerName="extract-content" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.508780 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" containerName="extract-content" Jan 20 18:19:40 crc kubenswrapper[4661]: E0120 18:19:40.508798 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" containerName="extract-utilities" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.508803 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" containerName="extract-utilities" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.508900 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa3c44f-16ac-4387-b1ae-25f2fe0b55d8" containerName="registry-server" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.510711 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.513258 4661 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-27fkp" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.513577 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.514694 4661 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.576572 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n"] Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.577282 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.583770 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n"] Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.587040 4661 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.641409 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/52982b0c-438c-4fbd-be5f-03fe6aca0327-frr-startup\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.641460 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52982b0c-438c-4fbd-be5f-03fe6aca0327-metrics-certs\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.641495 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/52982b0c-438c-4fbd-be5f-03fe6aca0327-reloader\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.641544 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/52982b0c-438c-4fbd-be5f-03fe6aca0327-metrics\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.641577 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnzpd\" (UniqueName: \"kubernetes.io/projected/52982b0c-438c-4fbd-be5f-03fe6aca0327-kube-api-access-dnzpd\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.641605 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/52982b0c-438c-4fbd-be5f-03fe6aca0327-frr-conf\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.641633 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/52982b0c-438c-4fbd-be5f-03fe6aca0327-frr-sockets\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.652561 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-58jbs"] Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.653912 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-58jbs" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.656207 4661 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.656341 4661 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.656376 4661 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-66r7z" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.657454 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.686228 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-htvf4"] Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.687070 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.689331 4661 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.701699 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-htvf4"] Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.742872 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/52982b0c-438c-4fbd-be5f-03fe6aca0327-metrics\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.742920 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnzpd\" (UniqueName: \"kubernetes.io/projected/52982b0c-438c-4fbd-be5f-03fe6aca0327-kube-api-access-dnzpd\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.742952 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/52982b0c-438c-4fbd-be5f-03fe6aca0327-frr-conf\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.742983 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c47c14be-ea84-47ba-a52b-9cb718ae6a30-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7z86n\" (UID: \"c47c14be-ea84-47ba-a52b-9cb718ae6a30\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.743013 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/52982b0c-438c-4fbd-be5f-03fe6aca0327-frr-sockets\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.743031 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/52982b0c-438c-4fbd-be5f-03fe6aca0327-frr-startup\") pod 
\"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.743048 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52982b0c-438c-4fbd-be5f-03fe6aca0327-metrics-certs\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.743074 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/52982b0c-438c-4fbd-be5f-03fe6aca0327-reloader\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.743096 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvv7m\" (UniqueName: \"kubernetes.io/projected/c47c14be-ea84-47ba-a52b-9cb718ae6a30-kube-api-access-hvv7m\") pod \"frr-k8s-webhook-server-7df86c4f6c-7z86n\" (UID: \"c47c14be-ea84-47ba-a52b-9cb718ae6a30\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.743332 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/52982b0c-438c-4fbd-be5f-03fe6aca0327-metrics\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.743524 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/52982b0c-438c-4fbd-be5f-03fe6aca0327-frr-sockets\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: E0120 18:19:40.743710 4661 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.743772 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/52982b0c-438c-4fbd-be5f-03fe6aca0327-reloader\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: E0120 18:19:40.743805 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52982b0c-438c-4fbd-be5f-03fe6aca0327-metrics-certs podName:52982b0c-438c-4fbd-be5f-03fe6aca0327 nodeName:}" failed. No retries permitted until 2026-01-20 18:19:41.243780951 +0000 UTC m=+837.574570613 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/52982b0c-438c-4fbd-be5f-03fe6aca0327-metrics-certs") pod "frr-k8s-9s69m" (UID: "52982b0c-438c-4fbd-be5f-03fe6aca0327") : secret "frr-k8s-certs-secret" not found Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.744154 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/52982b0c-438c-4fbd-be5f-03fe6aca0327-frr-conf\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.744187 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/52982b0c-438c-4fbd-be5f-03fe6aca0327-frr-startup\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.773632 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnzpd\" (UniqueName: \"kubernetes.io/projected/52982b0c-438c-4fbd-be5f-03fe6aca0327-kube-api-access-dnzpd\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.843810 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/69809466-8e46-4f8a-b90e-638f8af8b313-memberlist\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.843859 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/69809466-8e46-4f8a-b90e-638f8af8b313-metallb-excludel2\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.843913 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88zkb\" (UniqueName: \"kubernetes.io/projected/69809466-8e46-4f8a-b90e-638f8af8b313-kube-api-access-88zkb\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.843943 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69809466-8e46-4f8a-b90e-638f8af8b313-metrics-certs\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.843966 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvv7m\" (UniqueName: \"kubernetes.io/projected/c47c14be-ea84-47ba-a52b-9cb718ae6a30-kube-api-access-hvv7m\") pod \"frr-k8s-webhook-server-7df86c4f6c-7z86n\" (UID: \"c47c14be-ea84-47ba-a52b-9cb718ae6a30\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.844011 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e4437f72-2da5-4c3a-8a69-4a26f3190a62-cert\") pod 
\"controller-6968d8fdc4-htvf4\" (UID: \"e4437f72-2da5-4c3a-8a69-4a26f3190a62\") " pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.844027 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4437f72-2da5-4c3a-8a69-4a26f3190a62-metrics-certs\") pod \"controller-6968d8fdc4-htvf4\" (UID: \"e4437f72-2da5-4c3a-8a69-4a26f3190a62\") " pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.844042 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thkhg\" (UniqueName: \"kubernetes.io/projected/e4437f72-2da5-4c3a-8a69-4a26f3190a62-kube-api-access-thkhg\") pod \"controller-6968d8fdc4-htvf4\" (UID: \"e4437f72-2da5-4c3a-8a69-4a26f3190a62\") " pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.844067 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c47c14be-ea84-47ba-a52b-9cb718ae6a30-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7z86n\" (UID: \"c47c14be-ea84-47ba-a52b-9cb718ae6a30\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.847633 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c47c14be-ea84-47ba-a52b-9cb718ae6a30-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7z86n\" (UID: \"c47c14be-ea84-47ba-a52b-9cb718ae6a30\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.872530 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvv7m\" (UniqueName: \"kubernetes.io/projected/c47c14be-ea84-47ba-a52b-9cb718ae6a30-kube-api-access-hvv7m\") pod \"frr-k8s-webhook-server-7df86c4f6c-7z86n\" (UID: \"c47c14be-ea84-47ba-a52b-9cb718ae6a30\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.900239 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.944775 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e4437f72-2da5-4c3a-8a69-4a26f3190a62-cert\") pod \"controller-6968d8fdc4-htvf4\" (UID: \"e4437f72-2da5-4c3a-8a69-4a26f3190a62\") " pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.945024 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4437f72-2da5-4c3a-8a69-4a26f3190a62-metrics-certs\") pod \"controller-6968d8fdc4-htvf4\" (UID: \"e4437f72-2da5-4c3a-8a69-4a26f3190a62\") " pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.945045 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thkhg\" (UniqueName: \"kubernetes.io/projected/e4437f72-2da5-4c3a-8a69-4a26f3190a62-kube-api-access-thkhg\") pod \"controller-6968d8fdc4-htvf4\" (UID: \"e4437f72-2da5-4c3a-8a69-4a26f3190a62\") " pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.945078 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/69809466-8e46-4f8a-b90e-638f8af8b313-memberlist\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.945102 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/69809466-8e46-4f8a-b90e-638f8af8b313-metallb-excludel2\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.945141 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88zkb\" (UniqueName: \"kubernetes.io/projected/69809466-8e46-4f8a-b90e-638f8af8b313-kube-api-access-88zkb\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.945156 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69809466-8e46-4f8a-b90e-638f8af8b313-metrics-certs\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:40 crc kubenswrapper[4661]: E0120 18:19:40.945891 4661 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 20 18:19:40 crc kubenswrapper[4661]: E0120 18:19:40.945961 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69809466-8e46-4f8a-b90e-638f8af8b313-memberlist podName:69809466-8e46-4f8a-b90e-638f8af8b313 nodeName:}" failed. No retries permitted until 2026-01-20 18:19:41.445943124 +0000 UTC m=+837.776732786 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/69809466-8e46-4f8a-b90e-638f8af8b313-memberlist") pod "speaker-58jbs" (UID: "69809466-8e46-4f8a-b90e-638f8af8b313") : secret "metallb-memberlist" not found Jan 20 18:19:40 crc kubenswrapper[4661]: E0120 18:19:40.945903 4661 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 20 18:19:40 crc kubenswrapper[4661]: E0120 18:19:40.946116 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4437f72-2da5-4c3a-8a69-4a26f3190a62-metrics-certs podName:e4437f72-2da5-4c3a-8a69-4a26f3190a62 nodeName:}" failed. No retries permitted until 2026-01-20 18:19:41.446099128 +0000 UTC m=+837.776888870 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e4437f72-2da5-4c3a-8a69-4a26f3190a62-metrics-certs") pod "controller-6968d8fdc4-htvf4" (UID: "e4437f72-2da5-4c3a-8a69-4a26f3190a62") : secret "controller-certs-secret" not found Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.946489 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/69809466-8e46-4f8a-b90e-638f8af8b313-metallb-excludel2\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.949280 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69809466-8e46-4f8a-b90e-638f8af8b313-metrics-certs\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.950136 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e4437f72-2da5-4c3a-8a69-4a26f3190a62-cert\") pod \"controller-6968d8fdc4-htvf4\" (UID: \"e4437f72-2da5-4c3a-8a69-4a26f3190a62\") " pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.968155 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thkhg\" (UniqueName: \"kubernetes.io/projected/e4437f72-2da5-4c3a-8a69-4a26f3190a62-kube-api-access-thkhg\") pod \"controller-6968d8fdc4-htvf4\" (UID: \"e4437f72-2da5-4c3a-8a69-4a26f3190a62\") " pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:19:40 crc kubenswrapper[4661]: I0120 18:19:40.968521 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88zkb\" (UniqueName: \"kubernetes.io/projected/69809466-8e46-4f8a-b90e-638f8af8b313-kube-api-access-88zkb\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:41 crc kubenswrapper[4661]: I0120 18:19:41.128743 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n"] Jan 20 18:19:41 crc kubenswrapper[4661]: W0120 18:19:41.141185 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc47c14be_ea84_47ba_a52b_9cb718ae6a30.slice/crio-caeaea94a826c26dd1c70b4c606d544927dc01870681c2110bd0557f4ddb8f21 WatchSource:0}: Error finding container caeaea94a826c26dd1c70b4c606d544927dc01870681c2110bd0557f4ddb8f21: Status 404 returned error can't 
find the container with id caeaea94a826c26dd1c70b4c606d544927dc01870681c2110bd0557f4ddb8f21 Jan 20 18:19:41 crc kubenswrapper[4661]: I0120 18:19:41.250493 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52982b0c-438c-4fbd-be5f-03fe6aca0327-metrics-certs\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:41 crc kubenswrapper[4661]: I0120 18:19:41.254464 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52982b0c-438c-4fbd-be5f-03fe6aca0327-metrics-certs\") pod \"frr-k8s-9s69m\" (UID: \"52982b0c-438c-4fbd-be5f-03fe6aca0327\") " pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:41 crc kubenswrapper[4661]: I0120 18:19:41.428884 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:41 crc kubenswrapper[4661]: I0120 18:19:41.453184 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4437f72-2da5-4c3a-8a69-4a26f3190a62-metrics-certs\") pod \"controller-6968d8fdc4-htvf4\" (UID: \"e4437f72-2da5-4c3a-8a69-4a26f3190a62\") " pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:19:41 crc kubenswrapper[4661]: I0120 18:19:41.453262 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/69809466-8e46-4f8a-b90e-638f8af8b313-memberlist\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:41 crc kubenswrapper[4661]: E0120 18:19:41.453414 4661 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 20 18:19:41 crc kubenswrapper[4661]: E0120 18:19:41.453471 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69809466-8e46-4f8a-b90e-638f8af8b313-memberlist podName:69809466-8e46-4f8a-b90e-638f8af8b313 nodeName:}" failed. No retries permitted until 2026-01-20 18:19:42.453453681 +0000 UTC m=+838.784243353 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/69809466-8e46-4f8a-b90e-638f8af8b313-memberlist") pod "speaker-58jbs" (UID: "69809466-8e46-4f8a-b90e-638f8af8b313") : secret "metallb-memberlist" not found Jan 20 18:19:41 crc kubenswrapper[4661]: I0120 18:19:41.458348 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4437f72-2da5-4c3a-8a69-4a26f3190a62-metrics-certs\") pod \"controller-6968d8fdc4-htvf4\" (UID: \"e4437f72-2da5-4c3a-8a69-4a26f3190a62\") " pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:19:41 crc kubenswrapper[4661]: I0120 18:19:41.601310 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:19:41 crc kubenswrapper[4661]: I0120 18:19:41.869984 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-htvf4"] Jan 20 18:19:41 crc kubenswrapper[4661]: I0120 18:19:41.938608 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9s69m" event={"ID":"52982b0c-438c-4fbd-be5f-03fe6aca0327","Type":"ContainerStarted","Data":"1d99f980562c5b10f4ca82931c59cbe7e3c1d0db878489c11a78067625da7f00"} Jan 20 18:19:41 crc kubenswrapper[4661]: I0120 18:19:41.939794 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-htvf4" event={"ID":"e4437f72-2da5-4c3a-8a69-4a26f3190a62","Type":"ContainerStarted","Data":"c7bce8f0215108f040b8584847a8cd2ee575842ee6937d9abb9ee97a1aaa9a7a"} Jan 20 18:19:41 crc kubenswrapper[4661]: I0120 18:19:41.940770 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n" event={"ID":"c47c14be-ea84-47ba-a52b-9cb718ae6a30","Type":"ContainerStarted","Data":"caeaea94a826c26dd1c70b4c606d544927dc01870681c2110bd0557f4ddb8f21"} Jan 20 18:19:42 crc kubenswrapper[4661]: I0120 18:19:42.489992 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/69809466-8e46-4f8a-b90e-638f8af8b313-memberlist\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:42 crc kubenswrapper[4661]: I0120 18:19:42.494692 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/69809466-8e46-4f8a-b90e-638f8af8b313-memberlist\") pod \"speaker-58jbs\" (UID: \"69809466-8e46-4f8a-b90e-638f8af8b313\") " pod="metallb-system/speaker-58jbs" Jan 20 18:19:42 crc kubenswrapper[4661]: I0120 18:19:42.769439 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-58jbs" Jan 20 18:19:42 crc kubenswrapper[4661]: I0120 18:19:42.960909 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-htvf4" event={"ID":"e4437f72-2da5-4c3a-8a69-4a26f3190a62","Type":"ContainerStarted","Data":"55b31ed4ccd75a2f4e0507369639752f2ad8f69b976a93f54c5dfddcaecea37e"} Jan 20 18:19:42 crc kubenswrapper[4661]: I0120 18:19:42.960953 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-htvf4" event={"ID":"e4437f72-2da5-4c3a-8a69-4a26f3190a62","Type":"ContainerStarted","Data":"13c324ccf583d90e169b6c6015122ab415f897f684cabd36a04a9cac1054e162"} Jan 20 18:19:42 crc kubenswrapper[4661]: I0120 18:19:42.961043 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:19:42 crc kubenswrapper[4661]: I0120 18:19:42.966240 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-58jbs" event={"ID":"69809466-8e46-4f8a-b90e-638f8af8b313","Type":"ContainerStarted","Data":"904888885e54b6a6c34a93ffdfe83ca041a6a7037764bd00345eb350504975e9"} Jan 20 18:19:42 crc kubenswrapper[4661]: I0120 18:19:42.983341 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-htvf4" podStartSLOduration=2.983326947 podStartE2EDuration="2.983326947s" podCreationTimestamp="2026-01-20 18:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:19:42.982654899 +0000 UTC m=+839.313444551" watchObservedRunningTime="2026-01-20 18:19:42.983326947 +0000 UTC m=+839.314116609" Jan 20 18:19:43 crc kubenswrapper[4661]: I0120 18:19:43.972399 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-58jbs" event={"ID":"69809466-8e46-4f8a-b90e-638f8af8b313","Type":"ContainerStarted","Data":"9fab6251473ec2e8fdcc73ad1e4e31de7abf95bd3efc37cc9a7fb5009220afe8"} Jan 20 18:19:43 crc kubenswrapper[4661]: I0120 18:19:43.972443 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-58jbs" event={"ID":"69809466-8e46-4f8a-b90e-638f8af8b313","Type":"ContainerStarted","Data":"87477216a8fc10b428f63d0e38ff39fe070acd6b8acda4bf2aa72b6b6a43c875"} Jan 20 18:19:43 crc kubenswrapper[4661]: I0120 18:19:43.972535 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-58jbs" Jan 20 18:19:44 crc kubenswrapper[4661]: I0120 18:19:44.019546 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-58jbs" podStartSLOduration=4.019527842 podStartE2EDuration="4.019527842s" podCreationTimestamp="2026-01-20 18:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:19:44.012068535 +0000 UTC m=+840.342858217" watchObservedRunningTime="2026-01-20 18:19:44.019527842 +0000 UTC m=+840.350317504" Jan 20 18:19:50 crc kubenswrapper[4661]: I0120 18:19:50.011187 4661 generic.go:334] "Generic (PLEG): container finished" podID="52982b0c-438c-4fbd-be5f-03fe6aca0327" containerID="ecf6e6fd8d26d68f8e30c6c669da3ab7ba7abe32bd51b5fe187b4bbb1b7103f0" exitCode=0 Jan 20 18:19:50 crc kubenswrapper[4661]: I0120 18:19:50.011256 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9s69m" 
event={"ID":"52982b0c-438c-4fbd-be5f-03fe6aca0327","Type":"ContainerDied","Data":"ecf6e6fd8d26d68f8e30c6c669da3ab7ba7abe32bd51b5fe187b4bbb1b7103f0"} Jan 20 18:19:50 crc kubenswrapper[4661]: I0120 18:19:50.013446 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n" event={"ID":"c47c14be-ea84-47ba-a52b-9cb718ae6a30","Type":"ContainerStarted","Data":"ebd92ddfa489bfce63f67a33e1e9802b769a725d3bc214de85152722dec4c727"} Jan 20 18:19:50 crc kubenswrapper[4661]: I0120 18:19:50.013813 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n" Jan 20 18:19:50 crc kubenswrapper[4661]: I0120 18:19:50.061591 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n" podStartSLOduration=2.223058749 podStartE2EDuration="10.061541806s" podCreationTimestamp="2026-01-20 18:19:40 +0000 UTC" firstStartedPulling="2026-01-20 18:19:41.144292406 +0000 UTC m=+837.475082068" lastFinishedPulling="2026-01-20 18:19:48.982775463 +0000 UTC m=+845.313565125" observedRunningTime="2026-01-20 18:19:50.058373633 +0000 UTC m=+846.389163295" watchObservedRunningTime="2026-01-20 18:19:50.061541806 +0000 UTC m=+846.392331468" Jan 20 18:19:51 crc kubenswrapper[4661]: I0120 18:19:51.025303 4661 generic.go:334] "Generic (PLEG): container finished" podID="52982b0c-438c-4fbd-be5f-03fe6aca0327" containerID="ded96b3351e2f1b9895b8631d8db4e4a52b62843e46ee26a52795230e078d86b" exitCode=0 Jan 20 18:19:51 crc kubenswrapper[4661]: I0120 18:19:51.025423 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9s69m" event={"ID":"52982b0c-438c-4fbd-be5f-03fe6aca0327","Type":"ContainerDied","Data":"ded96b3351e2f1b9895b8631d8db4e4a52b62843e46ee26a52795230e078d86b"} Jan 20 18:19:52 crc kubenswrapper[4661]: I0120 18:19:52.036174 4661 generic.go:334] "Generic (PLEG): container finished" podID="52982b0c-438c-4fbd-be5f-03fe6aca0327" containerID="dd6e522af2597b428231733d93f882f8b92a1ae5ca10b48c01b2ad3c47671885" exitCode=0 Jan 20 18:19:52 crc kubenswrapper[4661]: I0120 18:19:52.036309 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9s69m" event={"ID":"52982b0c-438c-4fbd-be5f-03fe6aca0327","Type":"ContainerDied","Data":"dd6e522af2597b428231733d93f882f8b92a1ae5ca10b48c01b2ad3c47671885"} Jan 20 18:19:53 crc kubenswrapper[4661]: I0120 18:19:53.046202 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9s69m" event={"ID":"52982b0c-438c-4fbd-be5f-03fe6aca0327","Type":"ContainerStarted","Data":"fe2d9c461f6031530a35c9f4ef2fca5c915a81b984c38fe0aa2b6b52881e10ba"} Jan 20 18:19:53 crc kubenswrapper[4661]: I0120 18:19:53.046251 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9s69m" event={"ID":"52982b0c-438c-4fbd-be5f-03fe6aca0327","Type":"ContainerStarted","Data":"b4abfa8eb110e25422deb1b796f7e61062662664ce12ac639765b8355ad751e9"} Jan 20 18:19:53 crc kubenswrapper[4661]: I0120 18:19:53.046267 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9s69m" event={"ID":"52982b0c-438c-4fbd-be5f-03fe6aca0327","Type":"ContainerStarted","Data":"61a84e3df3f7e68c85f8bdf948187de6aa9e4aef874b7c8f31978aa0e672a26e"} Jan 20 18:19:54 crc kubenswrapper[4661]: I0120 18:19:54.058673 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9s69m" 
event={"ID":"52982b0c-438c-4fbd-be5f-03fe6aca0327","Type":"ContainerStarted","Data":"c179aa02c619798e21bd89e1fd2928bdf74b5eb31ac7cdc0318a5344c18f377c"} Jan 20 18:19:54 crc kubenswrapper[4661]: I0120 18:19:54.059360 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9s69m" event={"ID":"52982b0c-438c-4fbd-be5f-03fe6aca0327","Type":"ContainerStarted","Data":"a543c6e560697301f63d7246b9a973472a4499cd520bcaf1419c642362e4ebdc"} Jan 20 18:19:54 crc kubenswrapper[4661]: I0120 18:19:54.059390 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9s69m" event={"ID":"52982b0c-438c-4fbd-be5f-03fe6aca0327","Type":"ContainerStarted","Data":"888c18587f47f17ba83b06146406251929f2b0cf2d2285cb22b5c9d708b96905"} Jan 20 18:19:54 crc kubenswrapper[4661]: I0120 18:19:54.059416 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:54 crc kubenswrapper[4661]: I0120 18:19:54.096341 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9s69m" podStartSLOduration=6.686685723 podStartE2EDuration="14.096322606s" podCreationTimestamp="2026-01-20 18:19:40 +0000 UTC" firstStartedPulling="2026-01-20 18:19:41.600174686 +0000 UTC m=+837.930964348" lastFinishedPulling="2026-01-20 18:19:49.009811569 +0000 UTC m=+845.340601231" observedRunningTime="2026-01-20 18:19:54.090884384 +0000 UTC m=+850.421674096" watchObservedRunningTime="2026-01-20 18:19:54.096322606 +0000 UTC m=+850.427112288" Jan 20 18:19:56 crc kubenswrapper[4661]: I0120 18:19:56.429580 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9s69m" Jan 20 18:19:56 crc kubenswrapper[4661]: I0120 18:19:56.469983 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9s69m" Jan 20 18:20:00 crc kubenswrapper[4661]: I0120 18:20:00.905555 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z86n" Jan 20 18:20:01 crc kubenswrapper[4661]: I0120 18:20:01.606262 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-htvf4" Jan 20 18:20:02 crc kubenswrapper[4661]: I0120 18:20:02.779770 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-58jbs" Jan 20 18:20:05 crc kubenswrapper[4661]: I0120 18:20:05.850240 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zlt78"] Jan 20 18:20:05 crc kubenswrapper[4661]: I0120 18:20:05.851776 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zlt78" Jan 20 18:20:05 crc kubenswrapper[4661]: I0120 18:20:05.864283 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zlt78"] Jan 20 18:20:05 crc kubenswrapper[4661]: I0120 18:20:05.865116 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 20 18:20:05 crc kubenswrapper[4661]: I0120 18:20:05.865309 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-vbnbp" Jan 20 18:20:05 crc kubenswrapper[4661]: I0120 18:20:05.865607 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 20 18:20:06 crc kubenswrapper[4661]: I0120 18:20:06.034157 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls8zl\" (UniqueName: \"kubernetes.io/projected/9fe75b3e-ea34-4e9e-952f-415ac754bded-kube-api-access-ls8zl\") pod \"openstack-operator-index-zlt78\" (UID: \"9fe75b3e-ea34-4e9e-952f-415ac754bded\") " pod="openstack-operators/openstack-operator-index-zlt78" Jan 20 18:20:06 crc kubenswrapper[4661]: I0120 18:20:06.135161 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls8zl\" (UniqueName: \"kubernetes.io/projected/9fe75b3e-ea34-4e9e-952f-415ac754bded-kube-api-access-ls8zl\") pod \"openstack-operator-index-zlt78\" (UID: \"9fe75b3e-ea34-4e9e-952f-415ac754bded\") " pod="openstack-operators/openstack-operator-index-zlt78" Jan 20 18:20:06 crc kubenswrapper[4661]: I0120 18:20:06.155150 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls8zl\" (UniqueName: \"kubernetes.io/projected/9fe75b3e-ea34-4e9e-952f-415ac754bded-kube-api-access-ls8zl\") pod \"openstack-operator-index-zlt78\" (UID: \"9fe75b3e-ea34-4e9e-952f-415ac754bded\") " pod="openstack-operators/openstack-operator-index-zlt78" Jan 20 18:20:06 crc kubenswrapper[4661]: I0120 18:20:06.175386 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zlt78" Jan 20 18:20:06 crc kubenswrapper[4661]: I0120 18:20:06.635547 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zlt78"] Jan 20 18:20:06 crc kubenswrapper[4661]: W0120 18:20:06.653683 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fe75b3e_ea34_4e9e_952f_415ac754bded.slice/crio-30fcb72d2a97a68c142fb21f774b132a924db1f75a3b7675c6fe1b7b99e71a2a WatchSource:0}: Error finding container 30fcb72d2a97a68c142fb21f774b132a924db1f75a3b7675c6fe1b7b99e71a2a: Status 404 returned error can't find the container with id 30fcb72d2a97a68c142fb21f774b132a924db1f75a3b7675c6fe1b7b99e71a2a Jan 20 18:20:07 crc kubenswrapper[4661]: I0120 18:20:07.145180 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zlt78" event={"ID":"9fe75b3e-ea34-4e9e-952f-415ac754bded","Type":"ContainerStarted","Data":"30fcb72d2a97a68c142fb21f774b132a924db1f75a3b7675c6fe1b7b99e71a2a"} Jan 20 18:20:09 crc kubenswrapper[4661]: I0120 18:20:09.193870 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zlt78"] Jan 20 18:20:09 crc kubenswrapper[4661]: I0120 18:20:09.812480 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wj8kr"] Jan 20 18:20:09 crc kubenswrapper[4661]: I0120 18:20:09.814137 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wj8kr" Jan 20 18:20:09 crc kubenswrapper[4661]: I0120 18:20:09.820090 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wj8kr"] Jan 20 18:20:09 crc kubenswrapper[4661]: I0120 18:20:09.991436 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtvvd\" (UniqueName: \"kubernetes.io/projected/a9b5891c-9b50-4f14-ade6-69a048487d08-kube-api-access-xtvvd\") pod \"openstack-operator-index-wj8kr\" (UID: \"a9b5891c-9b50-4f14-ade6-69a048487d08\") " pod="openstack-operators/openstack-operator-index-wj8kr" Jan 20 18:20:10 crc kubenswrapper[4661]: I0120 18:20:10.092895 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtvvd\" (UniqueName: \"kubernetes.io/projected/a9b5891c-9b50-4f14-ade6-69a048487d08-kube-api-access-xtvvd\") pod \"openstack-operator-index-wj8kr\" (UID: \"a9b5891c-9b50-4f14-ade6-69a048487d08\") " pod="openstack-operators/openstack-operator-index-wj8kr" Jan 20 18:20:10 crc kubenswrapper[4661]: I0120 18:20:10.126792 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtvvd\" (UniqueName: \"kubernetes.io/projected/a9b5891c-9b50-4f14-ade6-69a048487d08-kube-api-access-xtvvd\") pod \"openstack-operator-index-wj8kr\" (UID: \"a9b5891c-9b50-4f14-ade6-69a048487d08\") " pod="openstack-operators/openstack-operator-index-wj8kr" Jan 20 18:20:10 crc kubenswrapper[4661]: I0120 18:20:10.143200 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wj8kr" Jan 20 18:20:10 crc kubenswrapper[4661]: I0120 18:20:10.166162 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zlt78" event={"ID":"9fe75b3e-ea34-4e9e-952f-415ac754bded","Type":"ContainerStarted","Data":"77cbc1e5beb24d161ef1e0e4bcbdb853a162532d5d80b47909f8bf568f47db76"} Jan 20 18:20:10 crc kubenswrapper[4661]: I0120 18:20:10.166353 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-zlt78" podUID="9fe75b3e-ea34-4e9e-952f-415ac754bded" containerName="registry-server" containerID="cri-o://77cbc1e5beb24d161ef1e0e4bcbdb853a162532d5d80b47909f8bf568f47db76" gracePeriod=2 Jan 20 18:20:10 crc kubenswrapper[4661]: I0120 18:20:10.191955 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zlt78" podStartSLOduration=2.546283409 podStartE2EDuration="5.191938488s" podCreationTimestamp="2026-01-20 18:20:05 +0000 UTC" firstStartedPulling="2026-01-20 18:20:06.656371769 +0000 UTC m=+862.987161441" lastFinishedPulling="2026-01-20 18:20:09.302026848 +0000 UTC m=+865.632816520" observedRunningTime="2026-01-20 18:20:10.190370667 +0000 UTC m=+866.521160339" watchObservedRunningTime="2026-01-20 18:20:10.191938488 +0000 UTC m=+866.522728160" Jan 20 18:20:10 crc kubenswrapper[4661]: I0120 18:20:10.455871 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wj8kr"] Jan 20 18:20:10 crc kubenswrapper[4661]: W0120 18:20:10.463120 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9b5891c_9b50_4f14_ade6_69a048487d08.slice/crio-75bb415a64a74e1801400864863363fd579ee81e0076c7595cc180eb64914e8c WatchSource:0}: Error finding container 75bb415a64a74e1801400864863363fd579ee81e0076c7595cc180eb64914e8c: Status 404 returned error can't find the container with id 75bb415a64a74e1801400864863363fd579ee81e0076c7595cc180eb64914e8c Jan 20 18:20:10 crc kubenswrapper[4661]: I0120 18:20:10.687778 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zlt78" Jan 20 18:20:10 crc kubenswrapper[4661]: I0120 18:20:10.805392 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls8zl\" (UniqueName: \"kubernetes.io/projected/9fe75b3e-ea34-4e9e-952f-415ac754bded-kube-api-access-ls8zl\") pod \"9fe75b3e-ea34-4e9e-952f-415ac754bded\" (UID: \"9fe75b3e-ea34-4e9e-952f-415ac754bded\") " Jan 20 18:20:10 crc kubenswrapper[4661]: I0120 18:20:10.824315 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe75b3e-ea34-4e9e-952f-415ac754bded-kube-api-access-ls8zl" (OuterVolumeSpecName: "kube-api-access-ls8zl") pod "9fe75b3e-ea34-4e9e-952f-415ac754bded" (UID: "9fe75b3e-ea34-4e9e-952f-415ac754bded"). InnerVolumeSpecName "kube-api-access-ls8zl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:20:10 crc kubenswrapper[4661]: I0120 18:20:10.906616 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls8zl\" (UniqueName: \"kubernetes.io/projected/9fe75b3e-ea34-4e9e-952f-415ac754bded-kube-api-access-ls8zl\") on node \"crc\" DevicePath \"\"" Jan 20 18:20:11 crc kubenswrapper[4661]: I0120 18:20:11.176419 4661 generic.go:334] "Generic (PLEG): container finished" podID="9fe75b3e-ea34-4e9e-952f-415ac754bded" containerID="77cbc1e5beb24d161ef1e0e4bcbdb853a162532d5d80b47909f8bf568f47db76" exitCode=0 Jan 20 18:20:11 crc kubenswrapper[4661]: I0120 18:20:11.176447 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zlt78" Jan 20 18:20:11 crc kubenswrapper[4661]: I0120 18:20:11.176450 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zlt78" event={"ID":"9fe75b3e-ea34-4e9e-952f-415ac754bded","Type":"ContainerDied","Data":"77cbc1e5beb24d161ef1e0e4bcbdb853a162532d5d80b47909f8bf568f47db76"} Jan 20 18:20:11 crc kubenswrapper[4661]: I0120 18:20:11.177050 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zlt78" event={"ID":"9fe75b3e-ea34-4e9e-952f-415ac754bded","Type":"ContainerDied","Data":"30fcb72d2a97a68c142fb21f774b132a924db1f75a3b7675c6fe1b7b99e71a2a"} Jan 20 18:20:11 crc kubenswrapper[4661]: I0120 18:20:11.177079 4661 scope.go:117] "RemoveContainer" containerID="77cbc1e5beb24d161ef1e0e4bcbdb853a162532d5d80b47909f8bf568f47db76" Jan 20 18:20:11 crc kubenswrapper[4661]: I0120 18:20:11.178211 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wj8kr" event={"ID":"a9b5891c-9b50-4f14-ade6-69a048487d08","Type":"ContainerStarted","Data":"09c443ff331d09546c6f9aed7bea965fa9588ffa2e702e0f1144184b6cb58ece"} Jan 20 18:20:11 crc kubenswrapper[4661]: I0120 18:20:11.178244 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wj8kr" event={"ID":"a9b5891c-9b50-4f14-ade6-69a048487d08","Type":"ContainerStarted","Data":"75bb415a64a74e1801400864863363fd579ee81e0076c7595cc180eb64914e8c"} Jan 20 18:20:11 crc kubenswrapper[4661]: I0120 18:20:11.195162 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wj8kr" podStartSLOduration=2.117013819 podStartE2EDuration="2.195143756s" podCreationTimestamp="2026-01-20 18:20:09 +0000 UTC" firstStartedPulling="2026-01-20 18:20:10.466681558 +0000 UTC m=+866.797471220" lastFinishedPulling="2026-01-20 18:20:10.544811505 +0000 UTC m=+866.875601157" observedRunningTime="2026-01-20 18:20:11.194342225 +0000 UTC m=+867.525131887" watchObservedRunningTime="2026-01-20 18:20:11.195143756 +0000 UTC m=+867.525933418" Jan 20 18:20:11 crc kubenswrapper[4661]: I0120 18:20:11.203453 4661 scope.go:117] "RemoveContainer" containerID="77cbc1e5beb24d161ef1e0e4bcbdb853a162532d5d80b47909f8bf568f47db76" Jan 20 18:20:11 crc kubenswrapper[4661]: E0120 18:20:11.206311 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77cbc1e5beb24d161ef1e0e4bcbdb853a162532d5d80b47909f8bf568f47db76\": container with ID starting with 77cbc1e5beb24d161ef1e0e4bcbdb853a162532d5d80b47909f8bf568f47db76 not found: ID does not exist" containerID="77cbc1e5beb24d161ef1e0e4bcbdb853a162532d5d80b47909f8bf568f47db76" 
Jan 20 18:20:11 crc kubenswrapper[4661]: I0120 18:20:11.206357 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77cbc1e5beb24d161ef1e0e4bcbdb853a162532d5d80b47909f8bf568f47db76"} err="failed to get container status \"77cbc1e5beb24d161ef1e0e4bcbdb853a162532d5d80b47909f8bf568f47db76\": rpc error: code = NotFound desc = could not find container \"77cbc1e5beb24d161ef1e0e4bcbdb853a162532d5d80b47909f8bf568f47db76\": container with ID starting with 77cbc1e5beb24d161ef1e0e4bcbdb853a162532d5d80b47909f8bf568f47db76 not found: ID does not exist" Jan 20 18:20:11 crc kubenswrapper[4661]: I0120 18:20:11.222755 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zlt78"] Jan 20 18:20:11 crc kubenswrapper[4661]: I0120 18:20:11.228786 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-zlt78"] Jan 20 18:20:11 crc kubenswrapper[4661]: I0120 18:20:11.433206 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9s69m" Jan 20 18:20:12 crc kubenswrapper[4661]: I0120 18:20:12.150772 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fe75b3e-ea34-4e9e-952f-415ac754bded" path="/var/lib/kubelet/pods/9fe75b3e-ea34-4e9e-952f-415ac754bded/volumes" Jan 20 18:20:20 crc kubenswrapper[4661]: I0120 18:20:20.154138 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wj8kr" Jan 20 18:20:20 crc kubenswrapper[4661]: I0120 18:20:20.155010 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-wj8kr" Jan 20 18:20:20 crc kubenswrapper[4661]: I0120 18:20:20.179921 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-wj8kr" Jan 20 18:20:20 crc kubenswrapper[4661]: I0120 18:20:20.275200 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-wj8kr" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.044210 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm"] Jan 20 18:20:22 crc kubenswrapper[4661]: E0120 18:20:22.044721 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe75b3e-ea34-4e9e-952f-415ac754bded" containerName="registry-server" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.044734 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe75b3e-ea34-4e9e-952f-415ac754bded" containerName="registry-server" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.044843 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe75b3e-ea34-4e9e-952f-415ac754bded" containerName="registry-server" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.045609 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.050689 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-ljg5t" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.078896 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm"] Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.120038 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc3336c8-c6d2-4f42-b8d7-534f5e765776-util\") pod \"c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm\" (UID: \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\") " pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.120186 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc3336c8-c6d2-4f42-b8d7-534f5e765776-bundle\") pod \"c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm\" (UID: \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\") " pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.120216 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bxv2\" (UniqueName: \"kubernetes.io/projected/dc3336c8-c6d2-4f42-b8d7-534f5e765776-kube-api-access-2bxv2\") pod \"c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm\" (UID: \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\") " pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.221847 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc3336c8-c6d2-4f42-b8d7-534f5e765776-bundle\") pod \"c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm\" (UID: \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\") " pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.221948 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bxv2\" (UniqueName: \"kubernetes.io/projected/dc3336c8-c6d2-4f42-b8d7-534f5e765776-kube-api-access-2bxv2\") pod \"c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm\" (UID: \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\") " pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.221983 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc3336c8-c6d2-4f42-b8d7-534f5e765776-util\") pod \"c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm\" (UID: \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\") " pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.222620 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/dc3336c8-c6d2-4f42-b8d7-534f5e765776-bundle\") pod \"c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm\" (UID: \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\") " pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.222687 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc3336c8-c6d2-4f42-b8d7-534f5e765776-util\") pod \"c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm\" (UID: \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\") " pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.263756 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bxv2\" (UniqueName: \"kubernetes.io/projected/dc3336c8-c6d2-4f42-b8d7-534f5e765776-kube-api-access-2bxv2\") pod \"c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm\" (UID: \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\") " pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.365600 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" Jan 20 18:20:22 crc kubenswrapper[4661]: I0120 18:20:22.688303 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm"] Jan 20 18:20:22 crc kubenswrapper[4661]: W0120 18:20:22.696050 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc3336c8_c6d2_4f42_b8d7_534f5e765776.slice/crio-ba45643c59c1942281ab9119cd35c710a736b0f9c5c395d8d7c90971d1e4c7e0 WatchSource:0}: Error finding container ba45643c59c1942281ab9119cd35c710a736b0f9c5c395d8d7c90971d1e4c7e0: Status 404 returned error can't find the container with id ba45643c59c1942281ab9119cd35c710a736b0f9c5c395d8d7c90971d1e4c7e0 Jan 20 18:20:23 crc kubenswrapper[4661]: I0120 18:20:23.280975 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" event={"ID":"dc3336c8-c6d2-4f42-b8d7-534f5e765776","Type":"ContainerStarted","Data":"ba45643c59c1942281ab9119cd35c710a736b0f9c5c395d8d7c90971d1e4c7e0"} Jan 20 18:20:25 crc kubenswrapper[4661]: I0120 18:20:25.298927 4661 generic.go:334] "Generic (PLEG): container finished" podID="dc3336c8-c6d2-4f42-b8d7-534f5e765776" containerID="a6b2e4d191a80e09d38df5b6e855455c8d496c8205a68046f4df75d43eb720dd" exitCode=0 Jan 20 18:20:25 crc kubenswrapper[4661]: I0120 18:20:25.298992 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" event={"ID":"dc3336c8-c6d2-4f42-b8d7-534f5e765776","Type":"ContainerDied","Data":"a6b2e4d191a80e09d38df5b6e855455c8d496c8205a68046f4df75d43eb720dd"} Jan 20 18:20:26 crc kubenswrapper[4661]: I0120 18:20:26.308081 4661 generic.go:334] "Generic (PLEG): container finished" podID="dc3336c8-c6d2-4f42-b8d7-534f5e765776" containerID="5b100e2a4fd91a2a95aa9dfdccd05fe8e59474c480d41e3672e0237c57bcd8eb" exitCode=0 Jan 20 18:20:26 crc kubenswrapper[4661]: I0120 18:20:26.308138 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" event={"ID":"dc3336c8-c6d2-4f42-b8d7-534f5e765776","Type":"ContainerDied","Data":"5b100e2a4fd91a2a95aa9dfdccd05fe8e59474c480d41e3672e0237c57bcd8eb"} Jan 20 18:20:27 crc kubenswrapper[4661]: I0120 18:20:27.323439 4661 generic.go:334] "Generic (PLEG): container finished" podID="dc3336c8-c6d2-4f42-b8d7-534f5e765776" containerID="9a2e7e0b937430671a8d051193e20c28636babefa83539fa8a726c969f279cc2" exitCode=0 Jan 20 18:20:27 crc kubenswrapper[4661]: I0120 18:20:27.323935 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" event={"ID":"dc3336c8-c6d2-4f42-b8d7-534f5e765776","Type":"ContainerDied","Data":"9a2e7e0b937430671a8d051193e20c28636babefa83539fa8a726c969f279cc2"} Jan 20 18:20:28 crc kubenswrapper[4661]: I0120 18:20:28.664969 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" Jan 20 18:20:28 crc kubenswrapper[4661]: I0120 18:20:28.838395 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc3336c8-c6d2-4f42-b8d7-534f5e765776-util\") pod \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\" (UID: \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\") " Jan 20 18:20:28 crc kubenswrapper[4661]: I0120 18:20:28.838497 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc3336c8-c6d2-4f42-b8d7-534f5e765776-bundle\") pod \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\" (UID: \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\") " Jan 20 18:20:28 crc kubenswrapper[4661]: I0120 18:20:28.839595 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc3336c8-c6d2-4f42-b8d7-534f5e765776-bundle" (OuterVolumeSpecName: "bundle") pod "dc3336c8-c6d2-4f42-b8d7-534f5e765776" (UID: "dc3336c8-c6d2-4f42-b8d7-534f5e765776"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:20:28 crc kubenswrapper[4661]: I0120 18:20:28.839636 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bxv2\" (UniqueName: \"kubernetes.io/projected/dc3336c8-c6d2-4f42-b8d7-534f5e765776-kube-api-access-2bxv2\") pod \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\" (UID: \"dc3336c8-c6d2-4f42-b8d7-534f5e765776\") " Jan 20 18:20:28 crc kubenswrapper[4661]: I0120 18:20:28.839968 4661 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc3336c8-c6d2-4f42-b8d7-534f5e765776-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:20:28 crc kubenswrapper[4661]: I0120 18:20:28.854702 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc3336c8-c6d2-4f42-b8d7-534f5e765776-util" (OuterVolumeSpecName: "util") pod "dc3336c8-c6d2-4f42-b8d7-534f5e765776" (UID: "dc3336c8-c6d2-4f42-b8d7-534f5e765776"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:20:28 crc kubenswrapper[4661]: I0120 18:20:28.856434 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3336c8-c6d2-4f42-b8d7-534f5e765776-kube-api-access-2bxv2" (OuterVolumeSpecName: "kube-api-access-2bxv2") pod "dc3336c8-c6d2-4f42-b8d7-534f5e765776" (UID: "dc3336c8-c6d2-4f42-b8d7-534f5e765776"). InnerVolumeSpecName "kube-api-access-2bxv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:20:28 crc kubenswrapper[4661]: I0120 18:20:28.941733 4661 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc3336c8-c6d2-4f42-b8d7-534f5e765776-util\") on node \"crc\" DevicePath \"\"" Jan 20 18:20:28 crc kubenswrapper[4661]: I0120 18:20:28.941813 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bxv2\" (UniqueName: \"kubernetes.io/projected/dc3336c8-c6d2-4f42-b8d7-534f5e765776-kube-api-access-2bxv2\") on node \"crc\" DevicePath \"\"" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.339115 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" event={"ID":"dc3336c8-c6d2-4f42-b8d7-534f5e765776","Type":"ContainerDied","Data":"ba45643c59c1942281ab9119cd35c710a736b0f9c5c395d8d7c90971d1e4c7e0"} Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.339160 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba45643c59c1942281ab9119cd35c710a736b0f9c5c395d8d7c90971d1e4c7e0" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.339555 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.661422 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-msbfk"] Jan 20 18:20:29 crc kubenswrapper[4661]: E0120 18:20:29.662043 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3336c8-c6d2-4f42-b8d7-534f5e765776" containerName="pull" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.662060 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3336c8-c6d2-4f42-b8d7-534f5e765776" containerName="pull" Jan 20 18:20:29 crc kubenswrapper[4661]: E0120 18:20:29.662083 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3336c8-c6d2-4f42-b8d7-534f5e765776" containerName="util" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.662090 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3336c8-c6d2-4f42-b8d7-534f5e765776" containerName="util" Jan 20 18:20:29 crc kubenswrapper[4661]: E0120 18:20:29.662101 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3336c8-c6d2-4f42-b8d7-534f5e765776" containerName="extract" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.662109 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3336c8-c6d2-4f42-b8d7-534f5e765776" containerName="extract" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.662245 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3336c8-c6d2-4f42-b8d7-534f5e765776" containerName="extract" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.663326 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.676215 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-msbfk"] Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.852162 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-utilities\") pod \"community-operators-msbfk\" (UID: \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\") " pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.852496 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-catalog-content\") pod \"community-operators-msbfk\" (UID: \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\") " pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.852629 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcbvp\" (UniqueName: \"kubernetes.io/projected/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-kube-api-access-vcbvp\") pod \"community-operators-msbfk\" (UID: \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\") " pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.953657 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcbvp\" (UniqueName: \"kubernetes.io/projected/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-kube-api-access-vcbvp\") pod \"community-operators-msbfk\" (UID: \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\") " pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.954445 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-utilities\") pod \"community-operators-msbfk\" (UID: \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\") " pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.955109 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-catalog-content\") pod \"community-operators-msbfk\" (UID: \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\") " pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.955043 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-utilities\") pod \"community-operators-msbfk\" (UID: \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\") " pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.955534 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-catalog-content\") pod \"community-operators-msbfk\" (UID: \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\") " pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.975209 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vcbvp\" (UniqueName: \"kubernetes.io/projected/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-kube-api-access-vcbvp\") pod \"community-operators-msbfk\" (UID: \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\") " pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:29 crc kubenswrapper[4661]: I0120 18:20:29.980992 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:30 crc kubenswrapper[4661]: I0120 18:20:30.342230 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-msbfk"] Jan 20 18:20:31 crc kubenswrapper[4661]: I0120 18:20:31.370519 4661 generic.go:334] "Generic (PLEG): container finished" podID="2483865b-1c62-44bf-a0c8-fab3fbcda4c7" containerID="4f526b8c18c2e65827f63fc8fdb329690cf5d480b539d250e6f1de252ef50af4" exitCode=0 Jan 20 18:20:31 crc kubenswrapper[4661]: I0120 18:20:31.370773 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msbfk" event={"ID":"2483865b-1c62-44bf-a0c8-fab3fbcda4c7","Type":"ContainerDied","Data":"4f526b8c18c2e65827f63fc8fdb329690cf5d480b539d250e6f1de252ef50af4"} Jan 20 18:20:31 crc kubenswrapper[4661]: I0120 18:20:31.377959 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msbfk" event={"ID":"2483865b-1c62-44bf-a0c8-fab3fbcda4c7","Type":"ContainerStarted","Data":"bfdc04eb9f038b7156e29bdb29ecb8604411ef7858dd0e916edd7044db40770f"} Jan 20 18:20:32 crc kubenswrapper[4661]: I0120 18:20:32.385419 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msbfk" event={"ID":"2483865b-1c62-44bf-a0c8-fab3fbcda4c7","Type":"ContainerStarted","Data":"70c91bf01753ea62cf54a2b64fcc80dddd2be97a6950acb710caf78ec675cae1"} Jan 20 18:20:33 crc kubenswrapper[4661]: I0120 18:20:33.164863 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-fdc84db4c-p87rq"] Jan 20 18:20:33 crc kubenswrapper[4661]: I0120 18:20:33.165583 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-fdc84db4c-p87rq" Jan 20 18:20:33 crc kubenswrapper[4661]: I0120 18:20:33.180452 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-mltvp" Jan 20 18:20:33 crc kubenswrapper[4661]: I0120 18:20:33.216252 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-fdc84db4c-p87rq"] Jan 20 18:20:33 crc kubenswrapper[4661]: I0120 18:20:33.311215 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zjv8\" (UniqueName: \"kubernetes.io/projected/8e170a45-9133-4aee-81e7-7f6188f48c91-kube-api-access-6zjv8\") pod \"openstack-operator-controller-init-fdc84db4c-p87rq\" (UID: \"8e170a45-9133-4aee-81e7-7f6188f48c91\") " pod="openstack-operators/openstack-operator-controller-init-fdc84db4c-p87rq" Jan 20 18:20:33 crc kubenswrapper[4661]: I0120 18:20:33.412232 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zjv8\" (UniqueName: \"kubernetes.io/projected/8e170a45-9133-4aee-81e7-7f6188f48c91-kube-api-access-6zjv8\") pod \"openstack-operator-controller-init-fdc84db4c-p87rq\" (UID: \"8e170a45-9133-4aee-81e7-7f6188f48c91\") " pod="openstack-operators/openstack-operator-controller-init-fdc84db4c-p87rq" Jan 20 18:20:33 crc kubenswrapper[4661]: I0120 18:20:33.428616 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zjv8\" (UniqueName: \"kubernetes.io/projected/8e170a45-9133-4aee-81e7-7f6188f48c91-kube-api-access-6zjv8\") pod \"openstack-operator-controller-init-fdc84db4c-p87rq\" (UID: \"8e170a45-9133-4aee-81e7-7f6188f48c91\") " pod="openstack-operators/openstack-operator-controller-init-fdc84db4c-p87rq" Jan 20 18:20:33 crc kubenswrapper[4661]: I0120 18:20:33.485924 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-fdc84db4c-p87rq" Jan 20 18:20:34 crc kubenswrapper[4661]: I0120 18:20:34.089157 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-fdc84db4c-p87rq"] Jan 20 18:20:34 crc kubenswrapper[4661]: W0120 18:20:34.096026 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e170a45_9133_4aee_81e7_7f6188f48c91.slice/crio-4f18571d5a5fb484be6e6981cfadde8c58bdcb44be521f73b2c8134edfb1a11c WatchSource:0}: Error finding container 4f18571d5a5fb484be6e6981cfadde8c58bdcb44be521f73b2c8134edfb1a11c: Status 404 returned error can't find the container with id 4f18571d5a5fb484be6e6981cfadde8c58bdcb44be521f73b2c8134edfb1a11c Jan 20 18:20:34 crc kubenswrapper[4661]: I0120 18:20:34.397350 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-fdc84db4c-p87rq" event={"ID":"8e170a45-9133-4aee-81e7-7f6188f48c91","Type":"ContainerStarted","Data":"4f18571d5a5fb484be6e6981cfadde8c58bdcb44be521f73b2c8134edfb1a11c"} Jan 20 18:20:34 crc kubenswrapper[4661]: I0120 18:20:34.402615 4661 generic.go:334] "Generic (PLEG): container finished" podID="2483865b-1c62-44bf-a0c8-fab3fbcda4c7" containerID="70c91bf01753ea62cf54a2b64fcc80dddd2be97a6950acb710caf78ec675cae1" exitCode=0 Jan 20 18:20:34 crc kubenswrapper[4661]: I0120 18:20:34.402660 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msbfk" event={"ID":"2483865b-1c62-44bf-a0c8-fab3fbcda4c7","Type":"ContainerDied","Data":"70c91bf01753ea62cf54a2b64fcc80dddd2be97a6950acb710caf78ec675cae1"} Jan 20 18:20:35 crc kubenswrapper[4661]: I0120 18:20:35.425918 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msbfk" event={"ID":"2483865b-1c62-44bf-a0c8-fab3fbcda4c7","Type":"ContainerStarted","Data":"cd22ca957bdb9bb1c1447c5902eb52e9677319a5be812aa344acba9abf6c48cc"} Jan 20 18:20:35 crc kubenswrapper[4661]: I0120 18:20:35.446978 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-msbfk" podStartSLOduration=2.961471748 podStartE2EDuration="6.446960204s" podCreationTimestamp="2026-01-20 18:20:29 +0000 UTC" firstStartedPulling="2026-01-20 18:20:31.377616808 +0000 UTC m=+887.708406470" lastFinishedPulling="2026-01-20 18:20:34.863105254 +0000 UTC m=+891.193894926" observedRunningTime="2026-01-20 18:20:35.444604642 +0000 UTC m=+891.775394314" watchObservedRunningTime="2026-01-20 18:20:35.446960204 +0000 UTC m=+891.777749866" Jan 20 18:20:39 crc kubenswrapper[4661]: I0120 18:20:39.981328 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:39 crc kubenswrapper[4661]: I0120 18:20:39.982529 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:40 crc kubenswrapper[4661]: I0120 18:20:40.078099 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:40 crc kubenswrapper[4661]: I0120 18:20:40.545488 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:42 crc kubenswrapper[4661]: I0120 18:20:42.194241 4661 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-msbfk"] Jan 20 18:20:42 crc kubenswrapper[4661]: I0120 18:20:42.499643 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-fdc84db4c-p87rq" event={"ID":"8e170a45-9133-4aee-81e7-7f6188f48c91","Type":"ContainerStarted","Data":"b988e029130ffeaf6f46ac85f6dfcec5befbcc0cad6bbef53726745bc4f5860c"} Jan 20 18:20:42 crc kubenswrapper[4661]: I0120 18:20:42.546387 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-fdc84db4c-p87rq" podStartSLOduration=1.996012446 podStartE2EDuration="9.546369171s" podCreationTimestamp="2026-01-20 18:20:33 +0000 UTC" firstStartedPulling="2026-01-20 18:20:34.098068786 +0000 UTC m=+890.428858448" lastFinishedPulling="2026-01-20 18:20:41.648425511 +0000 UTC m=+897.979215173" observedRunningTime="2026-01-20 18:20:42.542776917 +0000 UTC m=+898.873566589" watchObservedRunningTime="2026-01-20 18:20:42.546369171 +0000 UTC m=+898.877158843" Jan 20 18:20:43 crc kubenswrapper[4661]: I0120 18:20:43.486541 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-fdc84db4c-p87rq" Jan 20 18:20:43 crc kubenswrapper[4661]: I0120 18:20:43.507409 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-msbfk" podUID="2483865b-1c62-44bf-a0c8-fab3fbcda4c7" containerName="registry-server" containerID="cri-o://cd22ca957bdb9bb1c1447c5902eb52e9677319a5be812aa344acba9abf6c48cc" gracePeriod=2 Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.367130 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.448024 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-utilities\") pod \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\" (UID: \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\") " Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.448195 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcbvp\" (UniqueName: \"kubernetes.io/projected/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-kube-api-access-vcbvp\") pod \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\" (UID: \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\") " Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.448243 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-catalog-content\") pod \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\" (UID: \"2483865b-1c62-44bf-a0c8-fab3fbcda4c7\") " Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.448948 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-utilities" (OuterVolumeSpecName: "utilities") pod "2483865b-1c62-44bf-a0c8-fab3fbcda4c7" (UID: "2483865b-1c62-44bf-a0c8-fab3fbcda4c7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.453564 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-kube-api-access-vcbvp" (OuterVolumeSpecName: "kube-api-access-vcbvp") pod "2483865b-1c62-44bf-a0c8-fab3fbcda4c7" (UID: "2483865b-1c62-44bf-a0c8-fab3fbcda4c7"). InnerVolumeSpecName "kube-api-access-vcbvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.498189 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2483865b-1c62-44bf-a0c8-fab3fbcda4c7" (UID: "2483865b-1c62-44bf-a0c8-fab3fbcda4c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.519203 4661 generic.go:334] "Generic (PLEG): container finished" podID="2483865b-1c62-44bf-a0c8-fab3fbcda4c7" containerID="cd22ca957bdb9bb1c1447c5902eb52e9677319a5be812aa344acba9abf6c48cc" exitCode=0 Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.519832 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-msbfk" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.519829 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msbfk" event={"ID":"2483865b-1c62-44bf-a0c8-fab3fbcda4c7","Type":"ContainerDied","Data":"cd22ca957bdb9bb1c1447c5902eb52e9677319a5be812aa344acba9abf6c48cc"} Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.520129 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msbfk" event={"ID":"2483865b-1c62-44bf-a0c8-fab3fbcda4c7","Type":"ContainerDied","Data":"bfdc04eb9f038b7156e29bdb29ecb8604411ef7858dd0e916edd7044db40770f"} Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.520264 4661 scope.go:117] "RemoveContainer" containerID="cd22ca957bdb9bb1c1447c5902eb52e9677319a5be812aa344acba9abf6c48cc" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.551209 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.551954 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcbvp\" (UniqueName: \"kubernetes.io/projected/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-kube-api-access-vcbvp\") on node \"crc\" DevicePath \"\"" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.552068 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483865b-1c62-44bf-a0c8-fab3fbcda4c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.561003 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-msbfk"] Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.567574 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-msbfk"] Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.569194 4661 scope.go:117] "RemoveContainer" 
containerID="70c91bf01753ea62cf54a2b64fcc80dddd2be97a6950acb710caf78ec675cae1" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.590896 4661 scope.go:117] "RemoveContainer" containerID="4f526b8c18c2e65827f63fc8fdb329690cf5d480b539d250e6f1de252ef50af4" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.612811 4661 scope.go:117] "RemoveContainer" containerID="cd22ca957bdb9bb1c1447c5902eb52e9677319a5be812aa344acba9abf6c48cc" Jan 20 18:20:44 crc kubenswrapper[4661]: E0120 18:20:44.613164 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd22ca957bdb9bb1c1447c5902eb52e9677319a5be812aa344acba9abf6c48cc\": container with ID starting with cd22ca957bdb9bb1c1447c5902eb52e9677319a5be812aa344acba9abf6c48cc not found: ID does not exist" containerID="cd22ca957bdb9bb1c1447c5902eb52e9677319a5be812aa344acba9abf6c48cc" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.613196 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd22ca957bdb9bb1c1447c5902eb52e9677319a5be812aa344acba9abf6c48cc"} err="failed to get container status \"cd22ca957bdb9bb1c1447c5902eb52e9677319a5be812aa344acba9abf6c48cc\": rpc error: code = NotFound desc = could not find container \"cd22ca957bdb9bb1c1447c5902eb52e9677319a5be812aa344acba9abf6c48cc\": container with ID starting with cd22ca957bdb9bb1c1447c5902eb52e9677319a5be812aa344acba9abf6c48cc not found: ID does not exist" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.613223 4661 scope.go:117] "RemoveContainer" containerID="70c91bf01753ea62cf54a2b64fcc80dddd2be97a6950acb710caf78ec675cae1" Jan 20 18:20:44 crc kubenswrapper[4661]: E0120 18:20:44.613454 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70c91bf01753ea62cf54a2b64fcc80dddd2be97a6950acb710caf78ec675cae1\": container with ID starting with 70c91bf01753ea62cf54a2b64fcc80dddd2be97a6950acb710caf78ec675cae1 not found: ID does not exist" containerID="70c91bf01753ea62cf54a2b64fcc80dddd2be97a6950acb710caf78ec675cae1" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.613481 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70c91bf01753ea62cf54a2b64fcc80dddd2be97a6950acb710caf78ec675cae1"} err="failed to get container status \"70c91bf01753ea62cf54a2b64fcc80dddd2be97a6950acb710caf78ec675cae1\": rpc error: code = NotFound desc = could not find container \"70c91bf01753ea62cf54a2b64fcc80dddd2be97a6950acb710caf78ec675cae1\": container with ID starting with 70c91bf01753ea62cf54a2b64fcc80dddd2be97a6950acb710caf78ec675cae1 not found: ID does not exist" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.613499 4661 scope.go:117] "RemoveContainer" containerID="4f526b8c18c2e65827f63fc8fdb329690cf5d480b539d250e6f1de252ef50af4" Jan 20 18:20:44 crc kubenswrapper[4661]: E0120 18:20:44.613915 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f526b8c18c2e65827f63fc8fdb329690cf5d480b539d250e6f1de252ef50af4\": container with ID starting with 4f526b8c18c2e65827f63fc8fdb329690cf5d480b539d250e6f1de252ef50af4 not found: ID does not exist" containerID="4f526b8c18c2e65827f63fc8fdb329690cf5d480b539d250e6f1de252ef50af4" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.613940 4661 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4f526b8c18c2e65827f63fc8fdb329690cf5d480b539d250e6f1de252ef50af4"} err="failed to get container status \"4f526b8c18c2e65827f63fc8fdb329690cf5d480b539d250e6f1de252ef50af4\": rpc error: code = NotFound desc = could not find container \"4f526b8c18c2e65827f63fc8fdb329690cf5d480b539d250e6f1de252ef50af4\": container with ID starting with 4f526b8c18c2e65827f63fc8fdb329690cf5d480b539d250e6f1de252ef50af4 not found: ID does not exist" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.813965 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vlzt7"] Jan 20 18:20:44 crc kubenswrapper[4661]: E0120 18:20:44.814268 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483865b-1c62-44bf-a0c8-fab3fbcda4c7" containerName="registry-server" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.814297 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="2483865b-1c62-44bf-a0c8-fab3fbcda4c7" containerName="registry-server" Jan 20 18:20:44 crc kubenswrapper[4661]: E0120 18:20:44.814316 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483865b-1c62-44bf-a0c8-fab3fbcda4c7" containerName="extract-content" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.814329 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="2483865b-1c62-44bf-a0c8-fab3fbcda4c7" containerName="extract-content" Jan 20 18:20:44 crc kubenswrapper[4661]: E0120 18:20:44.814360 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483865b-1c62-44bf-a0c8-fab3fbcda4c7" containerName="extract-utilities" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.814372 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="2483865b-1c62-44bf-a0c8-fab3fbcda4c7" containerName="extract-utilities" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.814568 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="2483865b-1c62-44bf-a0c8-fab3fbcda4c7" containerName="registry-server" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.816177 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.831873 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlzt7"] Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.956939 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58rz5\" (UniqueName: \"kubernetes.io/projected/b9dd96c1-64e8-4823-b84c-a798157148e1-kube-api-access-58rz5\") pod \"certified-operators-vlzt7\" (UID: \"b9dd96c1-64e8-4823-b84c-a798157148e1\") " pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.957010 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9dd96c1-64e8-4823-b84c-a798157148e1-catalog-content\") pod \"certified-operators-vlzt7\" (UID: \"b9dd96c1-64e8-4823-b84c-a798157148e1\") " pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:44 crc kubenswrapper[4661]: I0120 18:20:44.957070 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9dd96c1-64e8-4823-b84c-a798157148e1-utilities\") pod \"certified-operators-vlzt7\" (UID: \"b9dd96c1-64e8-4823-b84c-a798157148e1\") " pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:45 crc kubenswrapper[4661]: I0120 18:20:45.058719 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9dd96c1-64e8-4823-b84c-a798157148e1-utilities\") pod \"certified-operators-vlzt7\" (UID: \"b9dd96c1-64e8-4823-b84c-a798157148e1\") " pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:45 crc kubenswrapper[4661]: I0120 18:20:45.058834 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58rz5\" (UniqueName: \"kubernetes.io/projected/b9dd96c1-64e8-4823-b84c-a798157148e1-kube-api-access-58rz5\") pod \"certified-operators-vlzt7\" (UID: \"b9dd96c1-64e8-4823-b84c-a798157148e1\") " pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:45 crc kubenswrapper[4661]: I0120 18:20:45.058868 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9dd96c1-64e8-4823-b84c-a798157148e1-catalog-content\") pod \"certified-operators-vlzt7\" (UID: \"b9dd96c1-64e8-4823-b84c-a798157148e1\") " pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:45 crc kubenswrapper[4661]: I0120 18:20:45.059353 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9dd96c1-64e8-4823-b84c-a798157148e1-utilities\") pod \"certified-operators-vlzt7\" (UID: \"b9dd96c1-64e8-4823-b84c-a798157148e1\") " pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:45 crc kubenswrapper[4661]: I0120 18:20:45.059406 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9dd96c1-64e8-4823-b84c-a798157148e1-catalog-content\") pod \"certified-operators-vlzt7\" (UID: \"b9dd96c1-64e8-4823-b84c-a798157148e1\") " pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:45 crc kubenswrapper[4661]: I0120 18:20:45.084783 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-58rz5\" (UniqueName: \"kubernetes.io/projected/b9dd96c1-64e8-4823-b84c-a798157148e1-kube-api-access-58rz5\") pod \"certified-operators-vlzt7\" (UID: \"b9dd96c1-64e8-4823-b84c-a798157148e1\") " pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:45 crc kubenswrapper[4661]: I0120 18:20:45.149893 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:45 crc kubenswrapper[4661]: I0120 18:20:45.632736 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlzt7"] Jan 20 18:20:46 crc kubenswrapper[4661]: I0120 18:20:46.153753 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2483865b-1c62-44bf-a0c8-fab3fbcda4c7" path="/var/lib/kubelet/pods/2483865b-1c62-44bf-a0c8-fab3fbcda4c7/volumes" Jan 20 18:20:46 crc kubenswrapper[4661]: I0120 18:20:46.532687 4661 generic.go:334] "Generic (PLEG): container finished" podID="b9dd96c1-64e8-4823-b84c-a798157148e1" containerID="3827b6a52fdd6be8a97f2f495aae3c26f2acdaa0e01d284ce38e33c61619b4cf" exitCode=0 Jan 20 18:20:46 crc kubenswrapper[4661]: I0120 18:20:46.532733 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlzt7" event={"ID":"b9dd96c1-64e8-4823-b84c-a798157148e1","Type":"ContainerDied","Data":"3827b6a52fdd6be8a97f2f495aae3c26f2acdaa0e01d284ce38e33c61619b4cf"} Jan 20 18:20:46 crc kubenswrapper[4661]: I0120 18:20:46.532766 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlzt7" event={"ID":"b9dd96c1-64e8-4823-b84c-a798157148e1","Type":"ContainerStarted","Data":"8d31d72b29c9b710e46665391f89f6cc400126fad3973e5b8fe908ba072853bf"} Jan 20 18:20:48 crc kubenswrapper[4661]: I0120 18:20:48.547600 4661 generic.go:334] "Generic (PLEG): container finished" podID="b9dd96c1-64e8-4823-b84c-a798157148e1" containerID="4ccf9c8d217e01c13f9cf761ede395954cd5c1b455866119e07c2652cee093b4" exitCode=0 Jan 20 18:20:48 crc kubenswrapper[4661]: I0120 18:20:48.547662 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlzt7" event={"ID":"b9dd96c1-64e8-4823-b84c-a798157148e1","Type":"ContainerDied","Data":"4ccf9c8d217e01c13f9cf761ede395954cd5c1b455866119e07c2652cee093b4"} Jan 20 18:20:50 crc kubenswrapper[4661]: I0120 18:20:50.570911 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlzt7" event={"ID":"b9dd96c1-64e8-4823-b84c-a798157148e1","Type":"ContainerStarted","Data":"1dc339b920d01c1e44d7502e0e067e4a810d161faa899c9590a4da7b010ae5a2"} Jan 20 18:20:53 crc kubenswrapper[4661]: I0120 18:20:53.490636 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-fdc84db4c-p87rq" Jan 20 18:20:53 crc kubenswrapper[4661]: I0120 18:20:53.531184 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vlzt7" podStartSLOduration=6.880649969 podStartE2EDuration="9.531152724s" podCreationTimestamp="2026-01-20 18:20:44 +0000 UTC" firstStartedPulling="2026-01-20 18:20:46.534922131 +0000 UTC m=+902.865711793" lastFinishedPulling="2026-01-20 18:20:49.185424846 +0000 UTC m=+905.516214548" observedRunningTime="2026-01-20 18:20:50.58724315 +0000 UTC m=+906.918032802" watchObservedRunningTime="2026-01-20 18:20:53.531152724 +0000 UTC 
m=+909.861942426" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.150950 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.151022 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.223635 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zqmbh"] Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.225841 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.255110 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqmbh"] Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.279204 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.291470 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008e3053-f6f2-4f55-be24-fa0397866db9-utilities\") pod \"redhat-marketplace-zqmbh\" (UID: \"008e3053-f6f2-4f55-be24-fa0397866db9\") " pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.291781 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6pgj\" (UniqueName: \"kubernetes.io/projected/008e3053-f6f2-4f55-be24-fa0397866db9-kube-api-access-s6pgj\") pod \"redhat-marketplace-zqmbh\" (UID: \"008e3053-f6f2-4f55-be24-fa0397866db9\") " pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.291965 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008e3053-f6f2-4f55-be24-fa0397866db9-catalog-content\") pod \"redhat-marketplace-zqmbh\" (UID: \"008e3053-f6f2-4f55-be24-fa0397866db9\") " pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.393808 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008e3053-f6f2-4f55-be24-fa0397866db9-utilities\") pod \"redhat-marketplace-zqmbh\" (UID: \"008e3053-f6f2-4f55-be24-fa0397866db9\") " pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.393858 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6pgj\" (UniqueName: \"kubernetes.io/projected/008e3053-f6f2-4f55-be24-fa0397866db9-kube-api-access-s6pgj\") pod \"redhat-marketplace-zqmbh\" (UID: \"008e3053-f6f2-4f55-be24-fa0397866db9\") " pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.393923 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008e3053-f6f2-4f55-be24-fa0397866db9-catalog-content\") pod \"redhat-marketplace-zqmbh\" (UID: \"008e3053-f6f2-4f55-be24-fa0397866db9\") " pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 
18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.394484 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008e3053-f6f2-4f55-be24-fa0397866db9-catalog-content\") pod \"redhat-marketplace-zqmbh\" (UID: \"008e3053-f6f2-4f55-be24-fa0397866db9\") " pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.394798 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008e3053-f6f2-4f55-be24-fa0397866db9-utilities\") pod \"redhat-marketplace-zqmbh\" (UID: \"008e3053-f6f2-4f55-be24-fa0397866db9\") " pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.414349 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6pgj\" (UniqueName: \"kubernetes.io/projected/008e3053-f6f2-4f55-be24-fa0397866db9-kube-api-access-s6pgj\") pod \"redhat-marketplace-zqmbh\" (UID: \"008e3053-f6f2-4f55-be24-fa0397866db9\") " pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.560408 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.646273 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:55 crc kubenswrapper[4661]: I0120 18:20:55.820526 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqmbh"] Jan 20 18:20:56 crc kubenswrapper[4661]: I0120 18:20:56.610278 4661 generic.go:334] "Generic (PLEG): container finished" podID="008e3053-f6f2-4f55-be24-fa0397866db9" containerID="166f9f95ff89c091e2d3b47f675fdbccf9191023a11d806b7225eac53c4f96c5" exitCode=0 Jan 20 18:20:56 crc kubenswrapper[4661]: I0120 18:20:56.610358 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqmbh" event={"ID":"008e3053-f6f2-4f55-be24-fa0397866db9","Type":"ContainerDied","Data":"166f9f95ff89c091e2d3b47f675fdbccf9191023a11d806b7225eac53c4f96c5"} Jan 20 18:20:56 crc kubenswrapper[4661]: I0120 18:20:56.610430 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqmbh" event={"ID":"008e3053-f6f2-4f55-be24-fa0397866db9","Type":"ContainerStarted","Data":"8fef779bb0ba0aa00b6b759deb2bd9800695331ce4965f16cbfa7b36f9ef607a"} Jan 20 18:20:57 crc kubenswrapper[4661]: I0120 18:20:57.568883 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlzt7"] Jan 20 18:20:57 crc kubenswrapper[4661]: I0120 18:20:57.618250 4661 generic.go:334] "Generic (PLEG): container finished" podID="008e3053-f6f2-4f55-be24-fa0397866db9" containerID="04f150abf93b4c8df37024f9075d984191260d4d2f1f5f91eee59525e9851a50" exitCode=0 Jan 20 18:20:57 crc kubenswrapper[4661]: I0120 18:20:57.618344 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqmbh" event={"ID":"008e3053-f6f2-4f55-be24-fa0397866db9","Type":"ContainerDied","Data":"04f150abf93b4c8df37024f9075d984191260d4d2f1f5f91eee59525e9851a50"} Jan 20 18:20:57 crc kubenswrapper[4661]: I0120 18:20:57.618518 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vlzt7" 
podUID="b9dd96c1-64e8-4823-b84c-a798157148e1" containerName="registry-server" containerID="cri-o://1dc339b920d01c1e44d7502e0e067e4a810d161faa899c9590a4da7b010ae5a2" gracePeriod=2 Jan 20 18:20:57 crc kubenswrapper[4661]: I0120 18:20:57.990407 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.136109 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9dd96c1-64e8-4823-b84c-a798157148e1-utilities\") pod \"b9dd96c1-64e8-4823-b84c-a798157148e1\" (UID: \"b9dd96c1-64e8-4823-b84c-a798157148e1\") " Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.136199 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9dd96c1-64e8-4823-b84c-a798157148e1-catalog-content\") pod \"b9dd96c1-64e8-4823-b84c-a798157148e1\" (UID: \"b9dd96c1-64e8-4823-b84c-a798157148e1\") " Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.136272 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58rz5\" (UniqueName: \"kubernetes.io/projected/b9dd96c1-64e8-4823-b84c-a798157148e1-kube-api-access-58rz5\") pod \"b9dd96c1-64e8-4823-b84c-a798157148e1\" (UID: \"b9dd96c1-64e8-4823-b84c-a798157148e1\") " Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.137520 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9dd96c1-64e8-4823-b84c-a798157148e1-utilities" (OuterVolumeSpecName: "utilities") pod "b9dd96c1-64e8-4823-b84c-a798157148e1" (UID: "b9dd96c1-64e8-4823-b84c-a798157148e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.138046 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9dd96c1-64e8-4823-b84c-a798157148e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.144378 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9dd96c1-64e8-4823-b84c-a798157148e1-kube-api-access-58rz5" (OuterVolumeSpecName: "kube-api-access-58rz5") pod "b9dd96c1-64e8-4823-b84c-a798157148e1" (UID: "b9dd96c1-64e8-4823-b84c-a798157148e1"). InnerVolumeSpecName "kube-api-access-58rz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.239403 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58rz5\" (UniqueName: \"kubernetes.io/projected/b9dd96c1-64e8-4823-b84c-a798157148e1-kube-api-access-58rz5\") on node \"crc\" DevicePath \"\"" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.389278 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9dd96c1-64e8-4823-b84c-a798157148e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9dd96c1-64e8-4823-b84c-a798157148e1" (UID: "b9dd96c1-64e8-4823-b84c-a798157148e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.442848 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9dd96c1-64e8-4823-b84c-a798157148e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.626204 4661 generic.go:334] "Generic (PLEG): container finished" podID="b9dd96c1-64e8-4823-b84c-a798157148e1" containerID="1dc339b920d01c1e44d7502e0e067e4a810d161faa899c9590a4da7b010ae5a2" exitCode=0 Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.626285 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlzt7" event={"ID":"b9dd96c1-64e8-4823-b84c-a798157148e1","Type":"ContainerDied","Data":"1dc339b920d01c1e44d7502e0e067e4a810d161faa899c9590a4da7b010ae5a2"} Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.626289 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlzt7" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.626313 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlzt7" event={"ID":"b9dd96c1-64e8-4823-b84c-a798157148e1","Type":"ContainerDied","Data":"8d31d72b29c9b710e46665391f89f6cc400126fad3973e5b8fe908ba072853bf"} Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.626332 4661 scope.go:117] "RemoveContainer" containerID="1dc339b920d01c1e44d7502e0e067e4a810d161faa899c9590a4da7b010ae5a2" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.630914 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqmbh" event={"ID":"008e3053-f6f2-4f55-be24-fa0397866db9","Type":"ContainerStarted","Data":"d9bc664f60ad96994197d2a0b32aaee3365e15ec3d633a49108b112695fd5f01"} Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.647283 4661 scope.go:117] "RemoveContainer" containerID="4ccf9c8d217e01c13f9cf761ede395954cd5c1b455866119e07c2652cee093b4" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.662108 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zqmbh" podStartSLOduration=2.016009341 podStartE2EDuration="3.662089024s" podCreationTimestamp="2026-01-20 18:20:55 +0000 UTC" firstStartedPulling="2026-01-20 18:20:56.612571007 +0000 UTC m=+912.943360699" lastFinishedPulling="2026-01-20 18:20:58.25865072 +0000 UTC m=+914.589440382" observedRunningTime="2026-01-20 18:20:58.656916988 +0000 UTC m=+914.987706650" watchObservedRunningTime="2026-01-20 18:20:58.662089024 +0000 UTC m=+914.992878686" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.673212 4661 scope.go:117] "RemoveContainer" containerID="3827b6a52fdd6be8a97f2f495aae3c26f2acdaa0e01d284ce38e33c61619b4cf" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.689641 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlzt7"] Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.692563 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vlzt7"] Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.699549 4661 scope.go:117] "RemoveContainer" containerID="1dc339b920d01c1e44d7502e0e067e4a810d161faa899c9590a4da7b010ae5a2" Jan 20 18:20:58 crc kubenswrapper[4661]: E0120 18:20:58.709870 4661 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1dc339b920d01c1e44d7502e0e067e4a810d161faa899c9590a4da7b010ae5a2\": container with ID starting with 1dc339b920d01c1e44d7502e0e067e4a810d161faa899c9590a4da7b010ae5a2 not found: ID does not exist" containerID="1dc339b920d01c1e44d7502e0e067e4a810d161faa899c9590a4da7b010ae5a2" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.709924 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc339b920d01c1e44d7502e0e067e4a810d161faa899c9590a4da7b010ae5a2"} err="failed to get container status \"1dc339b920d01c1e44d7502e0e067e4a810d161faa899c9590a4da7b010ae5a2\": rpc error: code = NotFound desc = could not find container \"1dc339b920d01c1e44d7502e0e067e4a810d161faa899c9590a4da7b010ae5a2\": container with ID starting with 1dc339b920d01c1e44d7502e0e067e4a810d161faa899c9590a4da7b010ae5a2 not found: ID does not exist" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.709950 4661 scope.go:117] "RemoveContainer" containerID="4ccf9c8d217e01c13f9cf761ede395954cd5c1b455866119e07c2652cee093b4" Jan 20 18:20:58 crc kubenswrapper[4661]: E0120 18:20:58.712374 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ccf9c8d217e01c13f9cf761ede395954cd5c1b455866119e07c2652cee093b4\": container with ID starting with 4ccf9c8d217e01c13f9cf761ede395954cd5c1b455866119e07c2652cee093b4 not found: ID does not exist" containerID="4ccf9c8d217e01c13f9cf761ede395954cd5c1b455866119e07c2652cee093b4" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.712600 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ccf9c8d217e01c13f9cf761ede395954cd5c1b455866119e07c2652cee093b4"} err="failed to get container status \"4ccf9c8d217e01c13f9cf761ede395954cd5c1b455866119e07c2652cee093b4\": rpc error: code = NotFound desc = could not find container \"4ccf9c8d217e01c13f9cf761ede395954cd5c1b455866119e07c2652cee093b4\": container with ID starting with 4ccf9c8d217e01c13f9cf761ede395954cd5c1b455866119e07c2652cee093b4 not found: ID does not exist" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.712621 4661 scope.go:117] "RemoveContainer" containerID="3827b6a52fdd6be8a97f2f495aae3c26f2acdaa0e01d284ce38e33c61619b4cf" Jan 20 18:20:58 crc kubenswrapper[4661]: E0120 18:20:58.713039 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3827b6a52fdd6be8a97f2f495aae3c26f2acdaa0e01d284ce38e33c61619b4cf\": container with ID starting with 3827b6a52fdd6be8a97f2f495aae3c26f2acdaa0e01d284ce38e33c61619b4cf not found: ID does not exist" containerID="3827b6a52fdd6be8a97f2f495aae3c26f2acdaa0e01d284ce38e33c61619b4cf" Jan 20 18:20:58 crc kubenswrapper[4661]: I0120 18:20:58.713078 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3827b6a52fdd6be8a97f2f495aae3c26f2acdaa0e01d284ce38e33c61619b4cf"} err="failed to get container status \"3827b6a52fdd6be8a97f2f495aae3c26f2acdaa0e01d284ce38e33c61619b4cf\": rpc error: code = NotFound desc = could not find container \"3827b6a52fdd6be8a97f2f495aae3c26f2acdaa0e01d284ce38e33c61619b4cf\": container with ID starting with 3827b6a52fdd6be8a97f2f495aae3c26f2acdaa0e01d284ce38e33c61619b4cf not found: ID does not exist" Jan 20 18:21:00 crc kubenswrapper[4661]: I0120 18:21:00.148213 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b9dd96c1-64e8-4823-b84c-a798157148e1" path="/var/lib/kubelet/pods/b9dd96c1-64e8-4823-b84c-a798157148e1/volumes" Jan 20 18:21:05 crc kubenswrapper[4661]: I0120 18:21:05.560745 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:21:05 crc kubenswrapper[4661]: I0120 18:21:05.561231 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:21:05 crc kubenswrapper[4661]: I0120 18:21:05.608878 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:21:05 crc kubenswrapper[4661]: I0120 18:21:05.710547 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:21:05 crc kubenswrapper[4661]: I0120 18:21:05.847001 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqmbh"] Jan 20 18:21:07 crc kubenswrapper[4661]: I0120 18:21:07.679388 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zqmbh" podUID="008e3053-f6f2-4f55-be24-fa0397866db9" containerName="registry-server" containerID="cri-o://d9bc664f60ad96994197d2a0b32aaee3365e15ec3d633a49108b112695fd5f01" gracePeriod=2 Jan 20 18:21:09 crc kubenswrapper[4661]: I0120 18:21:09.707276 4661 generic.go:334] "Generic (PLEG): container finished" podID="008e3053-f6f2-4f55-be24-fa0397866db9" containerID="d9bc664f60ad96994197d2a0b32aaee3365e15ec3d633a49108b112695fd5f01" exitCode=0 Jan 20 18:21:09 crc kubenswrapper[4661]: I0120 18:21:09.707359 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqmbh" event={"ID":"008e3053-f6f2-4f55-be24-fa0397866db9","Type":"ContainerDied","Data":"d9bc664f60ad96994197d2a0b32aaee3365e15ec3d633a49108b112695fd5f01"} Jan 20 18:21:09 crc kubenswrapper[4661]: I0120 18:21:09.857294 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:21:09 crc kubenswrapper[4661]: I0120 18:21:09.988850 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008e3053-f6f2-4f55-be24-fa0397866db9-utilities\") pod \"008e3053-f6f2-4f55-be24-fa0397866db9\" (UID: \"008e3053-f6f2-4f55-be24-fa0397866db9\") " Jan 20 18:21:09 crc kubenswrapper[4661]: I0120 18:21:09.989191 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008e3053-f6f2-4f55-be24-fa0397866db9-catalog-content\") pod \"008e3053-f6f2-4f55-be24-fa0397866db9\" (UID: \"008e3053-f6f2-4f55-be24-fa0397866db9\") " Jan 20 18:21:09 crc kubenswrapper[4661]: I0120 18:21:09.989219 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6pgj\" (UniqueName: \"kubernetes.io/projected/008e3053-f6f2-4f55-be24-fa0397866db9-kube-api-access-s6pgj\") pod \"008e3053-f6f2-4f55-be24-fa0397866db9\" (UID: \"008e3053-f6f2-4f55-be24-fa0397866db9\") " Jan 20 18:21:09 crc kubenswrapper[4661]: I0120 18:21:09.992113 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008e3053-f6f2-4f55-be24-fa0397866db9-utilities" (OuterVolumeSpecName: "utilities") pod "008e3053-f6f2-4f55-be24-fa0397866db9" (UID: "008e3053-f6f2-4f55-be24-fa0397866db9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:21:09 crc kubenswrapper[4661]: I0120 18:21:09.995144 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008e3053-f6f2-4f55-be24-fa0397866db9-kube-api-access-s6pgj" (OuterVolumeSpecName: "kube-api-access-s6pgj") pod "008e3053-f6f2-4f55-be24-fa0397866db9" (UID: "008e3053-f6f2-4f55-be24-fa0397866db9"). InnerVolumeSpecName "kube-api-access-s6pgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:21:10 crc kubenswrapper[4661]: I0120 18:21:10.012588 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008e3053-f6f2-4f55-be24-fa0397866db9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "008e3053-f6f2-4f55-be24-fa0397866db9" (UID: "008e3053-f6f2-4f55-be24-fa0397866db9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:21:10 crc kubenswrapper[4661]: I0120 18:21:10.090533 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008e3053-f6f2-4f55-be24-fa0397866db9-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:21:10 crc kubenswrapper[4661]: I0120 18:21:10.090566 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008e3053-f6f2-4f55-be24-fa0397866db9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:21:10 crc kubenswrapper[4661]: I0120 18:21:10.090577 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6pgj\" (UniqueName: \"kubernetes.io/projected/008e3053-f6f2-4f55-be24-fa0397866db9-kube-api-access-s6pgj\") on node \"crc\" DevicePath \"\"" Jan 20 18:21:10 crc kubenswrapper[4661]: I0120 18:21:10.715259 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqmbh" event={"ID":"008e3053-f6f2-4f55-be24-fa0397866db9","Type":"ContainerDied","Data":"8fef779bb0ba0aa00b6b759deb2bd9800695331ce4965f16cbfa7b36f9ef607a"} Jan 20 18:21:10 crc kubenswrapper[4661]: I0120 18:21:10.715323 4661 scope.go:117] "RemoveContainer" containerID="d9bc664f60ad96994197d2a0b32aaee3365e15ec3d633a49108b112695fd5f01" Jan 20 18:21:10 crc kubenswrapper[4661]: I0120 18:21:10.716580 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqmbh" Jan 20 18:21:10 crc kubenswrapper[4661]: I0120 18:21:10.732709 4661 scope.go:117] "RemoveContainer" containerID="04f150abf93b4c8df37024f9075d984191260d4d2f1f5f91eee59525e9851a50" Jan 20 18:21:10 crc kubenswrapper[4661]: I0120 18:21:10.751208 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqmbh"] Jan 20 18:21:10 crc kubenswrapper[4661]: I0120 18:21:10.760557 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqmbh"] Jan 20 18:21:10 crc kubenswrapper[4661]: I0120 18:21:10.760696 4661 scope.go:117] "RemoveContainer" containerID="166f9f95ff89c091e2d3b47f675fdbccf9191023a11d806b7225eac53c4f96c5" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.148930 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008e3053-f6f2-4f55-be24-fa0397866db9" path="/var/lib/kubelet/pods/008e3053-f6f2-4f55-be24-fa0397866db9/volumes" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.738860 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg"] Jan 20 18:21:12 crc kubenswrapper[4661]: E0120 18:21:12.739420 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9dd96c1-64e8-4823-b84c-a798157148e1" containerName="registry-server" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.739509 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9dd96c1-64e8-4823-b84c-a798157148e1" containerName="registry-server" Jan 20 18:21:12 crc kubenswrapper[4661]: E0120 18:21:12.739591 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008e3053-f6f2-4f55-be24-fa0397866db9" containerName="registry-server" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.739660 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="008e3053-f6f2-4f55-be24-fa0397866db9" containerName="registry-server" Jan 20 18:21:12 crc kubenswrapper[4661]: E0120 
18:21:12.739772 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008e3053-f6f2-4f55-be24-fa0397866db9" containerName="extract-content" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.739834 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="008e3053-f6f2-4f55-be24-fa0397866db9" containerName="extract-content" Jan 20 18:21:12 crc kubenswrapper[4661]: E0120 18:21:12.739898 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008e3053-f6f2-4f55-be24-fa0397866db9" containerName="extract-utilities" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.739949 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="008e3053-f6f2-4f55-be24-fa0397866db9" containerName="extract-utilities" Jan 20 18:21:12 crc kubenswrapper[4661]: E0120 18:21:12.739999 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9dd96c1-64e8-4823-b84c-a798157148e1" containerName="extract-content" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.740053 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9dd96c1-64e8-4823-b84c-a798157148e1" containerName="extract-content" Jan 20 18:21:12 crc kubenswrapper[4661]: E0120 18:21:12.740110 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9dd96c1-64e8-4823-b84c-a798157148e1" containerName="extract-utilities" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.740183 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9dd96c1-64e8-4823-b84c-a798157148e1" containerName="extract-utilities" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.740399 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="008e3053-f6f2-4f55-be24-fa0397866db9" containerName="registry-server" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.740488 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9dd96c1-64e8-4823-b84c-a798157148e1" containerName="registry-server" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.741005 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.753782 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.756405 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-4mm55" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.767828 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.768517 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.770988 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-sqpm5" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.779432 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.780209 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.804843 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rwsct" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.819768 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.822721 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.823469 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.823638 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpzwh\" (UniqueName: \"kubernetes.io/projected/08e08814-f213-4476-a78d-82cddc30022d-kube-api-access-lpzwh\") pod \"designate-operator-controller-manager-9f958b845-dw6hd\" (UID: \"08e08814-f213-4476-a78d-82cddc30022d\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.823732 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps9cc\" (UniqueName: \"kubernetes.io/projected/51bdae14-22a5-4783-8712-fc51ca6d8a07-kube-api-access-ps9cc\") pod \"cinder-operator-controller-manager-9b68f5989-hk9zx\" (UID: \"51bdae14-22a5-4783-8712-fc51ca6d8a07\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.823772 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ww6\" (UniqueName: \"kubernetes.io/projected/e257e7b3-ba70-44d2-abb9-6a6848bf1c06-kube-api-access-95ww6\") pod \"barbican-operator-controller-manager-7ddb5c749-bbwzg\" (UID: \"e257e7b3-ba70-44d2-abb9-6a6848bf1c06\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.827091 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-7d87k" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.834340 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.838501 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.867825 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.868497 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.870772 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ppxks" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.873023 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-r5bws"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.873609 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-r5bws" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.874626 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-rncmg" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.913171 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.914458 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.923519 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-r5bws"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.925896 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.926561 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps9cc\" (UniqueName: \"kubernetes.io/projected/51bdae14-22a5-4783-8712-fc51ca6d8a07-kube-api-access-ps9cc\") pod \"cinder-operator-controller-manager-9b68f5989-hk9zx\" (UID: \"51bdae14-22a5-4783-8712-fc51ca6d8a07\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.926741 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95ww6\" (UniqueName: \"kubernetes.io/projected/e257e7b3-ba70-44d2-abb9-6a6848bf1c06-kube-api-access-95ww6\") pod \"barbican-operator-controller-manager-7ddb5c749-bbwzg\" (UID: \"e257e7b3-ba70-44d2-abb9-6a6848bf1c06\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.926858 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrpl\" (UniqueName: \"kubernetes.io/projected/04a8f9c5-45fc-47db-adf2-3de38af2cf96-kube-api-access-cqrpl\") pod \"horizon-operator-controller-manager-77d5c5b54f-5w4m2\" (UID: \"04a8f9c5-45fc-47db-adf2-3de38af2cf96\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.926971 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcfdv\" (UniqueName: \"kubernetes.io/projected/2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec-kube-api-access-hcfdv\") pod \"heat-operator-controller-manager-594c8c9d5d-r5bws\" (UID: \"2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec\") " 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-r5bws" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.927142 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpzwh\" (UniqueName: \"kubernetes.io/projected/08e08814-f213-4476-a78d-82cddc30022d-kube-api-access-lpzwh\") pod \"designate-operator-controller-manager-9f958b845-dw6hd\" (UID: \"08e08814-f213-4476-a78d-82cddc30022d\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.927270 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87pjj\" (UniqueName: \"kubernetes.io/projected/eccd3436-cb57-49b8-a2f7-106fe5e39c7d-kube-api-access-87pjj\") pod \"glance-operator-controller-manager-c6994669c-gzjg9\" (UID: \"eccd3436-cb57-49b8-a2f7-106fe5e39c7d\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.947375 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-cqmk4" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.950398 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.952965 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.953614 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.969030 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-tmhwr" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.982733 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps9cc\" (UniqueName: \"kubernetes.io/projected/51bdae14-22a5-4783-8712-fc51ca6d8a07-kube-api-access-ps9cc\") pod \"cinder-operator-controller-manager-9b68f5989-hk9zx\" (UID: \"51bdae14-22a5-4783-8712-fc51ca6d8a07\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.985574 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpzwh\" (UniqueName: \"kubernetes.io/projected/08e08814-f213-4476-a78d-82cddc30022d-kube-api-access-lpzwh\") pod \"designate-operator-controller-manager-9f958b845-dw6hd\" (UID: \"08e08814-f213-4476-a78d-82cddc30022d\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.986742 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb"] Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.996990 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95ww6\" (UniqueName: \"kubernetes.io/projected/e257e7b3-ba70-44d2-abb9-6a6848bf1c06-kube-api-access-95ww6\") pod \"barbican-operator-controller-manager-7ddb5c749-bbwzg\" (UID: \"e257e7b3-ba70-44d2-abb9-6a6848bf1c06\") " 
pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg" Jan 20 18:21:12 crc kubenswrapper[4661]: I0120 18:21:12.998730 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.005209 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.005853 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.026343 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-skgdx" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.038160 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqrpl\" (UniqueName: \"kubernetes.io/projected/04a8f9c5-45fc-47db-adf2-3de38af2cf96-kube-api-access-cqrpl\") pod \"horizon-operator-controller-manager-77d5c5b54f-5w4m2\" (UID: \"04a8f9c5-45fc-47db-adf2-3de38af2cf96\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.038204 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcfdv\" (UniqueName: \"kubernetes.io/projected/2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec-kube-api-access-hcfdv\") pod \"heat-operator-controller-manager-594c8c9d5d-r5bws\" (UID: \"2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-r5bws" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.038237 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert\") pod \"infra-operator-controller-manager-77c48c7859-w8bbb\" (UID: \"70002b35-6f0d-4679-9279-a80574c467f0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.038280 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfn5g\" (UniqueName: \"kubernetes.io/projected/70002b35-6f0d-4679-9279-a80574c467f0-kube-api-access-cfn5g\") pod \"infra-operator-controller-manager-77c48c7859-w8bbb\" (UID: \"70002b35-6f0d-4679-9279-a80574c467f0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.038300 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44psq\" (UniqueName: \"kubernetes.io/projected/10ed69a9-7fbf-4139-b2b2-80dec4f8cf41-kube-api-access-44psq\") pod \"ironic-operator-controller-manager-78757b4889-cszrc\" (UID: \"10ed69a9-7fbf-4139-b2b2-80dec4f8cf41\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.038320 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87pjj\" (UniqueName: \"kubernetes.io/projected/eccd3436-cb57-49b8-a2f7-106fe5e39c7d-kube-api-access-87pjj\") pod \"glance-operator-controller-manager-c6994669c-gzjg9\" (UID: 
\"eccd3436-cb57-49b8-a2f7-106fe5e39c7d\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.047802 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.058046 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.088005 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcfdv\" (UniqueName: \"kubernetes.io/projected/2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec-kube-api-access-hcfdv\") pod \"heat-operator-controller-manager-594c8c9d5d-r5bws\" (UID: \"2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-r5bws" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.093195 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.099724 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87pjj\" (UniqueName: \"kubernetes.io/projected/eccd3436-cb57-49b8-a2f7-106fe5e39c7d-kube-api-access-87pjj\") pod \"glance-operator-controller-manager-c6994669c-gzjg9\" (UID: \"eccd3436-cb57-49b8-a2f7-106fe5e39c7d\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.104772 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.112571 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqrpl\" (UniqueName: \"kubernetes.io/projected/04a8f9c5-45fc-47db-adf2-3de38af2cf96-kube-api-access-cqrpl\") pod \"horizon-operator-controller-manager-77d5c5b54f-5w4m2\" (UID: \"04a8f9c5-45fc-47db-adf2-3de38af2cf96\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.119956 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.132700 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.142836 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert\") pod \"infra-operator-controller-manager-77c48c7859-w8bbb\" (UID: \"70002b35-6f0d-4679-9279-a80574c467f0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.142938 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfn5g\" (UniqueName: \"kubernetes.io/projected/70002b35-6f0d-4679-9279-a80574c467f0-kube-api-access-cfn5g\") pod \"infra-operator-controller-manager-77c48c7859-w8bbb\" (UID: \"70002b35-6f0d-4679-9279-a80574c467f0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.142981 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44psq\" (UniqueName: \"kubernetes.io/projected/10ed69a9-7fbf-4139-b2b2-80dec4f8cf41-kube-api-access-44psq\") pod \"ironic-operator-controller-manager-78757b4889-cszrc\" (UID: \"10ed69a9-7fbf-4139-b2b2-80dec4f8cf41\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.143040 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjqhz\" (UniqueName: \"kubernetes.io/projected/a5920876-3cd0-41cf-b7d8-6fd8ea0af29c-kube-api-access-jjqhz\") pod \"keystone-operator-controller-manager-767fdc4f47-4g9db\" (UID: \"a5920876-3cd0-41cf-b7d8-6fd8ea0af29c\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.159163 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-qth5w" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.159201 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn"] Jan 20 18:21:13 crc kubenswrapper[4661]: E0120 18:21:13.163874 4661 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 18:21:13 crc kubenswrapper[4661]: E0120 18:21:13.163948 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert podName:70002b35-6f0d-4679-9279-a80574c467f0 nodeName:}" failed. No retries permitted until 2026-01-20 18:21:13.663930882 +0000 UTC m=+929.994720544 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert") pod "infra-operator-controller-manager-77c48c7859-w8bbb" (UID: "70002b35-6f0d-4679-9279-a80574c467f0") : secret "infra-operator-webhook-server-cert" not found Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.164232 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.178078 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.180216 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mqfpd" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.190183 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-2g55t"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.191794 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-2g55t" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.500074 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.501321 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-sx2s2" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.501553 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-r5bws" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.503513 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfn5g\" (UniqueName: \"kubernetes.io/projected/70002b35-6f0d-4679-9279-a80574c467f0-kube-api-access-cfn5g\") pod \"infra-operator-controller-manager-77c48c7859-w8bbb\" (UID: \"70002b35-6f0d-4679-9279-a80574c467f0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.513805 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44psq\" (UniqueName: \"kubernetes.io/projected/10ed69a9-7fbf-4139-b2b2-80dec4f8cf41-kube-api-access-44psq\") pod \"ironic-operator-controller-manager-78757b4889-cszrc\" (UID: \"10ed69a9-7fbf-4139-b2b2-80dec4f8cf41\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.514729 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.520620 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjqhz\" (UniqueName: \"kubernetes.io/projected/a5920876-3cd0-41cf-b7d8-6fd8ea0af29c-kube-api-access-jjqhz\") pod \"keystone-operator-controller-manager-767fdc4f47-4g9db\" (UID: \"a5920876-3cd0-41cf-b7d8-6fd8ea0af29c\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.520659 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpqjq\" (UniqueName: \"kubernetes.io/projected/12b130a9-df33-4c1a-a145-961791dc9d9d-kube-api-access-qpqjq\") pod \"mariadb-operator-controller-manager-c87fff755-69ktn\" (UID: \"12b130a9-df33-4c1a-a145-961791dc9d9d\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.544041 4661 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.545196 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.551077 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pwdxt" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.579263 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.585873 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjqhz\" (UniqueName: \"kubernetes.io/projected/a5920876-3cd0-41cf-b7d8-6fd8ea0af29c-kube-api-access-jjqhz\") pod \"keystone-operator-controller-manager-767fdc4f47-4g9db\" (UID: \"a5920876-3cd0-41cf-b7d8-6fd8ea0af29c\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.597618 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.614965 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.615760 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.623572 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpqjq\" (UniqueName: \"kubernetes.io/projected/12b130a9-df33-4c1a-a145-961791dc9d9d-kube-api-access-qpqjq\") pod \"mariadb-operator-controller-manager-c87fff755-69ktn\" (UID: \"12b130a9-df33-4c1a-a145-961791dc9d9d\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.623650 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm8nb\" (UniqueName: \"kubernetes.io/projected/6c1159da-faf7-4389-b57b-05173827968d-kube-api-access-pm8nb\") pod \"manila-operator-controller-manager-864f6b75bf-svt25\" (UID: \"6c1159da-faf7-4389-b57b-05173827968d\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.623776 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vjzf\" (UniqueName: \"kubernetes.io/projected/52bfaf4d-624e-45d7-86d8-4c0e18afe2e6-kube-api-access-5vjzf\") pod \"nova-operator-controller-manager-65849867d6-2g55t\" (UID: \"52bfaf4d-624e-45d7-86d8-4c0e18afe2e6\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-2g55t" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.629398 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-q8wz4" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.662125 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.672645 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-2g55t"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.674430 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.682630 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpqjq\" (UniqueName: \"kubernetes.io/projected/12b130a9-df33-4c1a-a145-961791dc9d9d-kube-api-access-qpqjq\") pod \"mariadb-operator-controller-manager-c87fff755-69ktn\" (UID: \"12b130a9-df33-4c1a-a145-961791dc9d9d\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.707418 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.708280 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.719708 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4wjhh" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.723504 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.724577 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm8nb\" (UniqueName: \"kubernetes.io/projected/6c1159da-faf7-4389-b57b-05173827968d-kube-api-access-pm8nb\") pod \"manila-operator-controller-manager-864f6b75bf-svt25\" (UID: \"6c1159da-faf7-4389-b57b-05173827968d\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.724705 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82gft\" (UniqueName: \"kubernetes.io/projected/5798b368-6725-4e14-a77c-37b7bcfd538d-kube-api-access-82gft\") pod \"octavia-operator-controller-manager-7fc9b76cf6-mqz45\" (UID: \"5798b368-6725-4e14-a77c-37b7bcfd538d\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.724791 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vjzf\" (UniqueName: \"kubernetes.io/projected/52bfaf4d-624e-45d7-86d8-4c0e18afe2e6-kube-api-access-5vjzf\") pod \"nova-operator-controller-manager-65849867d6-2g55t\" (UID: \"52bfaf4d-624e-45d7-86d8-4c0e18afe2e6\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-2g55t" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.724875 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsm2t\" (UniqueName: \"kubernetes.io/projected/1b070a22-e050-4db7-bc74-f8a1129a8d61-kube-api-access-wsm2t\") pod \"neutron-operator-controller-manager-cb4666565-cqf8m\" (UID: \"1b070a22-e050-4db7-bc74-f8a1129a8d61\") " 
pod="openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.724999 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert\") pod \"infra-operator-controller-manager-77c48c7859-w8bbb\" (UID: \"70002b35-6f0d-4679-9279-a80574c467f0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:21:13 crc kubenswrapper[4661]: E0120 18:21:13.725207 4661 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 18:21:13 crc kubenswrapper[4661]: E0120 18:21:13.725316 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert podName:70002b35-6f0d-4679-9279-a80574c467f0 nodeName:}" failed. No retries permitted until 2026-01-20 18:21:14.725301124 +0000 UTC m=+931.056090786 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert") pod "infra-operator-controller-manager-77c48c7859-w8bbb" (UID: "70002b35-6f0d-4679-9279-a80574c467f0") : secret "infra-operator-webhook-server-cert" not found Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.734088 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.743620 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-658jc" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.743910 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.750543 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.751647 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.757262 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-gbq9g" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.767513 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.777850 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm8nb\" (UniqueName: \"kubernetes.io/projected/6c1159da-faf7-4389-b57b-05173827968d-kube-api-access-pm8nb\") pod \"manila-operator-controller-manager-864f6b75bf-svt25\" (UID: \"6c1159da-faf7-4389-b57b-05173827968d\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.782548 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vjzf\" (UniqueName: \"kubernetes.io/projected/52bfaf4d-624e-45d7-86d8-4c0e18afe2e6-kube-api-access-5vjzf\") pod \"nova-operator-controller-manager-65849867d6-2g55t\" (UID: \"52bfaf4d-624e-45d7-86d8-4c0e18afe2e6\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-2g55t" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.787989 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.816739 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.817567 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.826097 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-d8lfn" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.826306 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82gft\" (UniqueName: \"kubernetes.io/projected/5798b368-6725-4e14-a77c-37b7bcfd538d-kube-api-access-82gft\") pod \"octavia-operator-controller-manager-7fc9b76cf6-mqz45\" (UID: \"5798b368-6725-4e14-a77c-37b7bcfd538d\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.826354 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsm2t\" (UniqueName: \"kubernetes.io/projected/1b070a22-e050-4db7-bc74-f8a1129a8d61-kube-api-access-wsm2t\") pod \"neutron-operator-controller-manager-cb4666565-cqf8m\" (UID: \"1b070a22-e050-4db7-bc74-f8a1129a8d61\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.826416 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb2cz\" (UniqueName: \"kubernetes.io/projected/f61aad5b-f531-4dc0-8328-4b057c84651e-kube-api-access-fb2cz\") pod \"ovn-operator-controller-manager-55db956ddc-4msz7\" (UID: \"f61aad5b-f531-4dc0-8328-4b057c84651e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.845955 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.848303 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.854427 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-2wsx8"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.855353 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-2wsx8" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.856604 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82gft\" (UniqueName: \"kubernetes.io/projected/5798b368-6725-4e14-a77c-37b7bcfd538d-kube-api-access-82gft\") pod \"octavia-operator-controller-manager-7fc9b76cf6-mqz45\" (UID: \"5798b368-6725-4e14-a77c-37b7bcfd538d\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.857785 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-pn56m" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.861436 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsm2t\" (UniqueName: \"kubernetes.io/projected/1b070a22-e050-4db7-bc74-f8a1129a8d61-kube-api-access-wsm2t\") pod \"neutron-operator-controller-manager-cb4666565-cqf8m\" (UID: \"1b070a22-e050-4db7-bc74-f8a1129a8d61\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.861450 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.863214 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.867310 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.868519 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.872384 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9tc57" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.876905 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.877585 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.888379 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-k4m5p" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.891472 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.925948 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.927200 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhfp4\" (UniqueName: \"kubernetes.io/projected/65995719-9618-424e-a324-084d52a0cd47-kube-api-access-fhfp4\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc\" (UID: \"65995719-9618-424e-a324-084d52a0cd47\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.927238 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc\" (UID: \"65995719-9618-424e-a324-084d52a0cd47\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.927270 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb2cz\" (UniqueName: \"kubernetes.io/projected/f61aad5b-f531-4dc0-8328-4b057c84651e-kube-api-access-fb2cz\") pod \"ovn-operator-controller-manager-55db956ddc-4msz7\" (UID: \"f61aad5b-f531-4dc0-8328-4b057c84651e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.927307 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47gnq\" (UniqueName: \"kubernetes.io/projected/497cc518-3499-43be-8aff-c4ff58803cba-kube-api-access-47gnq\") pod \"swift-operator-controller-manager-85dd56d4cc-wqr58\" (UID: \"497cc518-3499-43be-8aff-c4ff58803cba\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.927359 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zz49\" (UniqueName: \"kubernetes.io/projected/dbbf0040-fc50-457e-ad76-42d6061a6df1-kube-api-access-2zz49\") pod \"placement-operator-controller-manager-686df47fcb-tcgdv\" (UID: \"dbbf0040-fc50-457e-ad76-42d6061a6df1\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.931598 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-2wsx8"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.954900 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb2cz\" (UniqueName: \"kubernetes.io/projected/f61aad5b-f531-4dc0-8328-4b057c84651e-kube-api-access-fb2cz\") pod \"ovn-operator-controller-manager-55db956ddc-4msz7\" (UID: \"f61aad5b-f531-4dc0-8328-4b057c84651e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.956865 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.957349 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.967954 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.983290 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8gf9"] Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.985886 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8gf9" Jan 20 18:21:13 crc kubenswrapper[4661]: I0120 18:21:13.988212 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-g5pjs" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.005650 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks"] Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.006638 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.009787 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.009978 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.010078 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-8nt9z" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.029274 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq2bb\" (UniqueName: \"kubernetes.io/projected/7f267072-d784-469d-acad-238e58ddd82c-kube-api-access-tq2bb\") pod \"test-operator-controller-manager-7cd8bc9dbb-gg985\" (UID: \"7f267072-d784-469d-acad-238e58ddd82c\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.029345 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhfp4\" (UniqueName: \"kubernetes.io/projected/65995719-9618-424e-a324-084d52a0cd47-kube-api-access-fhfp4\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc\" (UID: \"65995719-9618-424e-a324-084d52a0cd47\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.029367 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc\" (UID: \"65995719-9618-424e-a324-084d52a0cd47\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.029389 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k9xk\" (UniqueName: 
\"kubernetes.io/projected/5a07b584-21cc-464b-a3bf-046c6e0ab18f-kube-api-access-6k9xk\") pod \"watcher-operator-controller-manager-64cd966744-hppzk\" (UID: \"5a07b584-21cc-464b-a3bf-046c6e0ab18f\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.029423 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv968\" (UniqueName: \"kubernetes.io/projected/22fe1eac-c7f9-4cef-8811-db5861b4caa2-kube-api-access-vv968\") pod \"telemetry-operator-controller-manager-5f8f495fcf-2wsx8\" (UID: \"22fe1eac-c7f9-4cef-8811-db5861b4caa2\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-2wsx8" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.029447 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47gnq\" (UniqueName: \"kubernetes.io/projected/497cc518-3499-43be-8aff-c4ff58803cba-kube-api-access-47gnq\") pod \"swift-operator-controller-manager-85dd56d4cc-wqr58\" (UID: \"497cc518-3499-43be-8aff-c4ff58803cba\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.029490 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zz49\" (UniqueName: \"kubernetes.io/projected/dbbf0040-fc50-457e-ad76-42d6061a6df1-kube-api-access-2zz49\") pod \"placement-operator-controller-manager-686df47fcb-tcgdv\" (UID: \"dbbf0040-fc50-457e-ad76-42d6061a6df1\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" Jan 20 18:21:14 crc kubenswrapper[4661]: E0120 18:21:14.033731 4661 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 18:21:14 crc kubenswrapper[4661]: E0120 18:21:14.033806 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert podName:65995719-9618-424e-a324-084d52a0cd47 nodeName:}" failed. No retries permitted until 2026-01-20 18:21:14.533788117 +0000 UTC m=+930.864577769 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" (UID: "65995719-9618-424e-a324-084d52a0cd47") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.036550 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8gf9"] Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.052262 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zz49\" (UniqueName: \"kubernetes.io/projected/dbbf0040-fc50-457e-ad76-42d6061a6df1-kube-api-access-2zz49\") pod \"placement-operator-controller-manager-686df47fcb-tcgdv\" (UID: \"dbbf0040-fc50-457e-ad76-42d6061a6df1\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.074552 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47gnq\" (UniqueName: \"kubernetes.io/projected/497cc518-3499-43be-8aff-c4ff58803cba-kube-api-access-47gnq\") pod \"swift-operator-controller-manager-85dd56d4cc-wqr58\" (UID: \"497cc518-3499-43be-8aff-c4ff58803cba\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.080324 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-2g55t" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.083192 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhfp4\" (UniqueName: \"kubernetes.io/projected/65995719-9618-424e-a324-084d52a0cd47-kube-api-access-fhfp4\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc\" (UID: \"65995719-9618-424e-a324-084d52a0cd47\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.087433 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks"] Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.133789 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.133856 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq2bb\" (UniqueName: \"kubernetes.io/projected/7f267072-d784-469d-acad-238e58ddd82c-kube-api-access-tq2bb\") pod \"test-operator-controller-manager-7cd8bc9dbb-gg985\" (UID: \"7f267072-d784-469d-acad-238e58ddd82c\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.133931 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs\") pod 
\"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.133953 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k9xk\" (UniqueName: \"kubernetes.io/projected/5a07b584-21cc-464b-a3bf-046c6e0ab18f-kube-api-access-6k9xk\") pod \"watcher-operator-controller-manager-64cd966744-hppzk\" (UID: \"5a07b584-21cc-464b-a3bf-046c6e0ab18f\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.133994 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv968\" (UniqueName: \"kubernetes.io/projected/22fe1eac-c7f9-4cef-8811-db5861b4caa2-kube-api-access-vv968\") pod \"telemetry-operator-controller-manager-5f8f495fcf-2wsx8\" (UID: \"22fe1eac-c7f9-4cef-8811-db5861b4caa2\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-2wsx8" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.134041 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbzjc\" (UniqueName: \"kubernetes.io/projected/2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec-kube-api-access-zbzjc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v8gf9\" (UID: \"2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8gf9" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.134067 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzmtq\" (UniqueName: \"kubernetes.io/projected/d603e76e-8a9d-444f-b251-2d29b5588c8e-kube-api-access-rzmtq\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.135773 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.166703 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq2bb\" (UniqueName: \"kubernetes.io/projected/7f267072-d784-469d-acad-238e58ddd82c-kube-api-access-tq2bb\") pod \"test-operator-controller-manager-7cd8bc9dbb-gg985\" (UID: \"7f267072-d784-469d-acad-238e58ddd82c\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.167218 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k9xk\" (UniqueName: \"kubernetes.io/projected/5a07b584-21cc-464b-a3bf-046c6e0ab18f-kube-api-access-6k9xk\") pod \"watcher-operator-controller-manager-64cd966744-hppzk\" (UID: \"5a07b584-21cc-464b-a3bf-046c6e0ab18f\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.184554 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv968\" (UniqueName: \"kubernetes.io/projected/22fe1eac-c7f9-4cef-8811-db5861b4caa2-kube-api-access-vv968\") pod \"telemetry-operator-controller-manager-5f8f495fcf-2wsx8\" (UID: \"22fe1eac-c7f9-4cef-8811-db5861b4caa2\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-2wsx8" Jan 20 18:21:14 crc kubenswrapper[4661]: I0120 18:21:14.185812 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.204107 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.247445 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.264653 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbzjc\" (UniqueName: \"kubernetes.io/projected/2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec-kube-api-access-zbzjc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v8gf9\" (UID: \"2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8gf9" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.264791 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzmtq\" (UniqueName: \"kubernetes.io/projected/d603e76e-8a9d-444f-b251-2d29b5588c8e-kube-api-access-rzmtq\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.264975 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.265130 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:14.265402 4661 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:14.265612 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs podName:d603e76e-8a9d-444f-b251-2d29b5588c8e nodeName:}" failed. No retries permitted until 2026-01-20 18:21:14.765594411 +0000 UTC m=+931.096384073 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs") pod "openstack-operator-controller-manager-58b4997fc9-9wjks" (UID: "d603e76e-8a9d-444f-b251-2d29b5588c8e") : secret "webhook-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:14.266079 4661 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:14.266141 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs podName:d603e76e-8a9d-444f-b251-2d29b5588c8e nodeName:}" failed. No retries permitted until 2026-01-20 18:21:14.766125015 +0000 UTC m=+931.096914677 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs") pod "openstack-operator-controller-manager-58b4997fc9-9wjks" (UID: "d603e76e-8a9d-444f-b251-2d29b5588c8e") : secret "metrics-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.306867 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzmtq\" (UniqueName: \"kubernetes.io/projected/d603e76e-8a9d-444f-b251-2d29b5588c8e-kube-api-access-rzmtq\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.309150 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbzjc\" (UniqueName: \"kubernetes.io/projected/2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec-kube-api-access-zbzjc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v8gf9\" (UID: \"2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8gf9" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.353133 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.386496 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8gf9" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.469955 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-2wsx8" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.498409 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9"] Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.502986 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd"] Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.510716 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg"] Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.579336 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc\" (UID: \"65995719-9618-424e-a324-084d52a0cd47\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:14.579783 4661 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:14.579821 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert podName:65995719-9618-424e-a324-084d52a0cd47 nodeName:}" failed. No retries permitted until 2026-01-20 18:21:15.579808035 +0000 UTC m=+931.910597697 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" (UID: "65995719-9618-424e-a324-084d52a0cd47") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.781796 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.781844 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert\") pod \"infra-operator-controller-manager-77c48c7859-w8bbb\" (UID: \"70002b35-6f0d-4679-9279-a80574c467f0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:14.781871 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:14.781944 4661 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:14.781998 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs podName:d603e76e-8a9d-444f-b251-2d29b5588c8e nodeName:}" failed. No retries permitted until 2026-01-20 18:21:15.781981509 +0000 UTC m=+932.112771171 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs") pod "openstack-operator-controller-manager-58b4997fc9-9wjks" (UID: "d603e76e-8a9d-444f-b251-2d29b5588c8e") : secret "metrics-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:14.782033 4661 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:14.782068 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs podName:d603e76e-8a9d-444f-b251-2d29b5588c8e nodeName:}" failed. No retries permitted until 2026-01-20 18:21:15.782059281 +0000 UTC m=+932.112848943 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs") pod "openstack-operator-controller-manager-58b4997fc9-9wjks" (UID: "d603e76e-8a9d-444f-b251-2d29b5588c8e") : secret "webhook-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:14.782070 4661 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:14.782157 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert podName:70002b35-6f0d-4679-9279-a80574c467f0 nodeName:}" failed. No retries permitted until 2026-01-20 18:21:16.782138803 +0000 UTC m=+933.112928455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert") pod "infra-operator-controller-manager-77c48c7859-w8bbb" (UID: "70002b35-6f0d-4679-9279-a80574c467f0") : secret "infra-operator-webhook-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:15.592565 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc\" (UID: \"65995719-9618-424e-a324-084d52a0cd47\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:15.592796 4661 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:15.592876 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert podName:65995719-9618-424e-a324-084d52a0cd47 nodeName:}" failed. No retries permitted until 2026-01-20 18:21:17.592852689 +0000 UTC m=+933.923642371 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" (UID: "65995719-9618-424e-a324-084d52a0cd47") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: W0120 18:21:15.669869 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08e08814_f213_4476_a78d_82cddc30022d.slice/crio-437c4d0be20d40a05fe91cd8c3f1f6aca86360d2b28ba8372df624f0eb8aeaf7 WatchSource:0}: Error finding container 437c4d0be20d40a05fe91cd8c3f1f6aca86360d2b28ba8372df624f0eb8aeaf7: Status 404 returned error can't find the container with id 437c4d0be20d40a05fe91cd8c3f1f6aca86360d2b28ba8372df624f0eb8aeaf7 Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:15.795072 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:15.795212 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:15.795721 4661 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:15.795817 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs podName:d603e76e-8a9d-444f-b251-2d29b5588c8e nodeName:}" failed. No retries permitted until 2026-01-20 18:21:17.795797003 +0000 UTC m=+934.126586665 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs") pod "openstack-operator-controller-manager-58b4997fc9-9wjks" (UID: "d603e76e-8a9d-444f-b251-2d29b5588c8e") : secret "webhook-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:15.795917 4661 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: E0120 18:21:15.795980 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs podName:d603e76e-8a9d-444f-b251-2d29b5588c8e nodeName:}" failed. No retries permitted until 2026-01-20 18:21:17.795969698 +0000 UTC m=+934.126759360 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs") pod "openstack-operator-controller-manager-58b4997fc9-9wjks" (UID: "d603e76e-8a9d-444f-b251-2d29b5588c8e") : secret "metrics-server-cert" not found Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:15.810115 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9" event={"ID":"eccd3436-cb57-49b8-a2f7-106fe5e39c7d","Type":"ContainerStarted","Data":"d0a5553bdb6df1c1e51ae091cdec493cdb1114be819338a1c5b71e6cc0e6e0e3"} Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:15.811753 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd" event={"ID":"08e08814-f213-4476-a78d-82cddc30022d","Type":"ContainerStarted","Data":"437c4d0be20d40a05fe91cd8c3f1f6aca86360d2b28ba8372df624f0eb8aeaf7"} Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:15.813073 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg" event={"ID":"e257e7b3-ba70-44d2-abb9-6a6848bf1c06","Type":"ContainerStarted","Data":"d96f53b1df886833671630f29d1bdbf0fc422bfb2c732be06d4c127c09465098"} Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:15.833766 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-r5bws"] Jan 20 18:21:15 crc kubenswrapper[4661]: W0120 18:21:15.900394 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bf3fc47_9ca2_45aa_9835_1ed5d413b0ec.slice/crio-d88c958b57c35033aef7e95d16b9bade8dd7e2e34d0532b35d973c6b2993fd79 WatchSource:0}: Error finding container d88c958b57c35033aef7e95d16b9bade8dd7e2e34d0532b35d973c6b2993fd79: Status 404 returned error can't find the container with id d88c958b57c35033aef7e95d16b9bade8dd7e2e34d0532b35d973c6b2993fd79 Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:15.960610 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc"] Jan 20 18:21:15 crc kubenswrapper[4661]: I0120 18:21:15.998479 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2"] Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.030053 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx"] Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.367016 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45"] Jan 20 18:21:16 crc kubenswrapper[4661]: W0120 18:21:16.399093 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5798b368_6725_4e14_a77c_37b7bcfd538d.slice/crio-953d68f3ac57a6c264e3caef76c3ba8897cdcd8844e046f22c898e0d51428263 WatchSource:0}: Error finding container 953d68f3ac57a6c264e3caef76c3ba8897cdcd8844e046f22c898e0d51428263: Status 404 returned error can't find the container with id 953d68f3ac57a6c264e3caef76c3ba8897cdcd8844e046f22c898e0d51428263 Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.409404 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn"] Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.435231 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-2g55t"] Jan 20 18:21:16 crc kubenswrapper[4661]: W0120 18:21:16.446848 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c1159da_faf7_4389_b57b_05173827968d.slice/crio-1293ff9279a7751a67623d04d14ab55a069a4a49c2439e1d24dd772a758d72f1 WatchSource:0}: Error finding container 1293ff9279a7751a67623d04d14ab55a069a4a49c2439e1d24dd772a758d72f1: Status 404 returned error can't find the container with id 1293ff9279a7751a67623d04d14ab55a069a4a49c2439e1d24dd772a758d72f1 Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.447038 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25"] Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.465544 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db"] Jan 20 18:21:16 crc kubenswrapper[4661]: W0120 18:21:16.467915 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b070a22_e050_4db7_bc74_f8a1129a8d61.slice/crio-9ca6a796a32fc1bc2cc091e4919b7f28f653e386e0347e6cfaf57b81f60399fb WatchSource:0}: Error finding container 9ca6a796a32fc1bc2cc091e4919b7f28f653e386e0347e6cfaf57b81f60399fb: Status 404 returned error can't find the container with id 9ca6a796a32fc1bc2cc091e4919b7f28f653e386e0347e6cfaf57b81f60399fb Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.473619 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m"] Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.613905 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58"] Jan 20 18:21:16 crc kubenswrapper[4661]: W0120 18:21:16.618240 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a07b584_21cc_464b_a3bf_046c6e0ab18f.slice/crio-a0f471e7b4109b7807341f8c3e0c6b7a9552c05758175c12ee73cfd45c39afd6 WatchSource:0}: Error finding container a0f471e7b4109b7807341f8c3e0c6b7a9552c05758175c12ee73cfd45c39afd6: Status 404 returned error can't find the container with id a0f471e7b4109b7807341f8c3e0c6b7a9552c05758175c12ee73cfd45c39afd6 Jan 20 18:21:16 crc kubenswrapper[4661]: E0120 18:21:16.629380 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 
10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6k9xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-hppzk_openstack-operators(5a07b584-21cc-464b-a3bf-046c6e0ab18f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 18:21:16 crc kubenswrapper[4661]: E0120 18:21:16.632126 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" podUID="5a07b584-21cc-464b-a3bf-046c6e0ab18f" Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.642818 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk"] Jan 20 18:21:16 crc kubenswrapper[4661]: E0120 18:21:16.646916 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fb2cz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-4msz7_openstack-operators(f61aad5b-f531-4dc0-8328-4b057c84651e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 18:21:16 crc kubenswrapper[4661]: E0120 18:21:16.648150 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" podUID="f61aad5b-f531-4dc0-8328-4b057c84651e" Jan 20 18:21:16 crc kubenswrapper[4661]: W0120 18:21:16.652634 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f267072_d784_469d_acad_238e58ddd82c.slice/crio-27bc4f417c9b0de49cee67d39378a02a32402e2c9c66e07312cdd58a81d8a8e4 WatchSource:0}: Error finding container 27bc4f417c9b0de49cee67d39378a02a32402e2c9c66e07312cdd58a81d8a8e4: Status 404 returned error can't find the container with id 27bc4f417c9b0de49cee67d39378a02a32402e2c9c66e07312cdd58a81d8a8e4 Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.658936 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7"] Jan 20 18:21:16 crc kubenswrapper[4661]: E0120 18:21:16.668732 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tq2bb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-gg985_openstack-operators(7f267072-d784-469d-acad-238e58ddd82c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 18:21:16 crc kubenswrapper[4661]: E0120 18:21:16.670244 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" podUID="7f267072-d784-469d-acad-238e58ddd82c" Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.679239 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985"] Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.763592 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8gf9"] Jan 20 18:21:16 crc kubenswrapper[4661]: W0120 18:21:16.765406 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e78fff0_2eba_4aa9_a4b0_2f5ff775e1ec.slice/crio-5a8b5741167dbfa8f3060161c5a97e06d1796c875a4b2c9db04b497e71283ebe WatchSource:0}: Error finding container 5a8b5741167dbfa8f3060161c5a97e06d1796c875a4b2c9db04b497e71283ebe: Status 404 returned error can't find the container with id 5a8b5741167dbfa8f3060161c5a97e06d1796c875a4b2c9db04b497e71283ebe Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.820224 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert\") pod \"infra-operator-controller-manager-77c48c7859-w8bbb\" (UID: \"70002b35-6f0d-4679-9279-a80574c467f0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:21:16 crc kubenswrapper[4661]: E0120 18:21:16.820568 4661 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 18:21:16 crc 
kubenswrapper[4661]: E0120 18:21:16.820707 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert podName:70002b35-6f0d-4679-9279-a80574c467f0 nodeName:}" failed. No retries permitted until 2026-01-20 18:21:20.82066179 +0000 UTC m=+937.151451452 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert") pod "infra-operator-controller-manager-77c48c7859-w8bbb" (UID: "70002b35-6f0d-4679-9279-a80574c467f0") : secret "infra-operator-webhook-server-cert" not found Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.825305 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn" event={"ID":"12b130a9-df33-4c1a-a145-961791dc9d9d","Type":"ContainerStarted","Data":"61545211ef88e7b3671135ad9cf6a22cf0ce63a1fc7a87037266dbe0bf1b270d"} Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.830980 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-r5bws" event={"ID":"2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec","Type":"ContainerStarted","Data":"d88c958b57c35033aef7e95d16b9bade8dd7e2e34d0532b35d973c6b2993fd79"} Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.832137 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45" event={"ID":"5798b368-6725-4e14-a77c-37b7bcfd538d","Type":"ContainerStarted","Data":"953d68f3ac57a6c264e3caef76c3ba8897cdcd8844e046f22c898e0d51428263"} Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.834022 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc" event={"ID":"10ed69a9-7fbf-4139-b2b2-80dec4f8cf41","Type":"ContainerStarted","Data":"4e83672a2e871706a7da15d043636f26442e3874423a1c1752c2c6d1be9eb230"} Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.835726 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-2g55t" event={"ID":"52bfaf4d-624e-45d7-86d8-4c0e18afe2e6","Type":"ContainerStarted","Data":"1db92ef8ebcae32ef1bbc8142a9464d31d6a0948c8083d485a9bfe1e15ced1ad"} Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.837046 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25" event={"ID":"6c1159da-faf7-4389-b57b-05173827968d","Type":"ContainerStarted","Data":"1293ff9279a7751a67623d04d14ab55a069a4a49c2439e1d24dd772a758d72f1"} Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.837900 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" event={"ID":"5a07b584-21cc-464b-a3bf-046c6e0ab18f","Type":"ContainerStarted","Data":"a0f471e7b4109b7807341f8c3e0c6b7a9552c05758175c12ee73cfd45c39afd6"} Jan 20 18:21:16 crc kubenswrapper[4661]: E0120 18:21:16.842033 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" podUID="5a07b584-21cc-464b-a3bf-046c6e0ab18f" Jan 20 
18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.851587 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" event={"ID":"f61aad5b-f531-4dc0-8328-4b057c84651e","Type":"ContainerStarted","Data":"7db76c3072ed144cc5c8a2f4b41e8e9e0fcdd0b9ba113cccf0055beb24f40b31"} Jan 20 18:21:16 crc kubenswrapper[4661]: E0120 18:21:16.853915 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" podUID="f61aad5b-f531-4dc0-8328-4b057c84651e" Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.858342 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m" event={"ID":"1b070a22-e050-4db7-bc74-f8a1129a8d61","Type":"ContainerStarted","Data":"9ca6a796a32fc1bc2cc091e4919b7f28f653e386e0347e6cfaf57b81f60399fb"} Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.859015 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-2wsx8"] Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.859772 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx" event={"ID":"51bdae14-22a5-4783-8712-fc51ca6d8a07","Type":"ContainerStarted","Data":"fb9669e208ab61f8a749a827b2022b5ef976757eee59bbb13faa31e17d66667a"} Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.861407 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58" event={"ID":"497cc518-3499-43be-8aff-c4ff58803cba","Type":"ContainerStarted","Data":"e41eac7eebed8ed7e48319841319762b533ca235d0d0aaa9b2d68bdd9e3a1aad"} Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.874264 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv"] Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.880627 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8gf9" event={"ID":"2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec","Type":"ContainerStarted","Data":"5a8b5741167dbfa8f3060161c5a97e06d1796c875a4b2c9db04b497e71283ebe"} Jan 20 18:21:16 crc kubenswrapper[4661]: W0120 18:21:16.884475 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22fe1eac_c7f9_4cef_8811_db5861b4caa2.slice/crio-6059b409ad53cca1550068247c5018d5f00503c25f8544e3dbe4c547bc1aabbb WatchSource:0}: Error finding container 6059b409ad53cca1550068247c5018d5f00503c25f8544e3dbe4c547bc1aabbb: Status 404 returned error can't find the container with id 6059b409ad53cca1550068247c5018d5f00503c25f8544e3dbe4c547bc1aabbb Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.894390 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" event={"ID":"7f267072-d784-469d-acad-238e58ddd82c","Type":"ContainerStarted","Data":"27bc4f417c9b0de49cee67d39378a02a32402e2c9c66e07312cdd58a81d8a8e4"} Jan 20 18:21:16 crc kubenswrapper[4661]: W0120 18:21:16.895152 4661 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbbf0040_fc50_457e_ad76_42d6061a6df1.slice/crio-9652539f87420ee18e5c2d3e376ea007da3df25fc742ebf63cd2639ae4fb9ba2 WatchSource:0}: Error finding container 9652539f87420ee18e5c2d3e376ea007da3df25fc742ebf63cd2639ae4fb9ba2: Status 404 returned error can't find the container with id 9652539f87420ee18e5c2d3e376ea007da3df25fc742ebf63cd2639ae4fb9ba2 Jan 20 18:21:16 crc kubenswrapper[4661]: E0120 18:21:16.899020 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" podUID="7f267072-d784-469d-acad-238e58ddd82c" Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.899212 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2" event={"ID":"04a8f9c5-45fc-47db-adf2-3de38af2cf96","Type":"ContainerStarted","Data":"314d0a15b84ea37319aa1ad8b990ba1d28ae8faa3cbe58ce7981c5c9b3f64e48"} Jan 20 18:21:16 crc kubenswrapper[4661]: E0120 18:21:16.900527 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2zz49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-tcgdv_openstack-operators(dbbf0040-fc50-457e-ad76-42d6061a6df1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 18:21:16 crc kubenswrapper[4661]: E0120 18:21:16.901605 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" podUID="dbbf0040-fc50-457e-ad76-42d6061a6df1" Jan 20 18:21:16 crc kubenswrapper[4661]: I0120 18:21:16.902313 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db" event={"ID":"a5920876-3cd0-41cf-b7d8-6fd8ea0af29c","Type":"ContainerStarted","Data":"d2cecb1f71f076c6b6f2f2361bd23eabcb0305b72e6013e333503e82bb815e91"} Jan 20 18:21:17 crc kubenswrapper[4661]: I0120 18:21:17.640411 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc\" (UID: \"65995719-9618-424e-a324-084d52a0cd47\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:21:17 crc kubenswrapper[4661]: E0120 18:21:17.640578 4661 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 18:21:17 crc kubenswrapper[4661]: E0120 18:21:17.640748 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert podName:65995719-9618-424e-a324-084d52a0cd47 nodeName:}" failed. No retries permitted until 2026-01-20 18:21:21.640734224 +0000 UTC m=+937.971523886 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" (UID: "65995719-9618-424e-a324-084d52a0cd47") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 18:21:17 crc kubenswrapper[4661]: I0120 18:21:17.843843 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:17 crc kubenswrapper[4661]: I0120 18:21:17.843917 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:17 crc kubenswrapper[4661]: E0120 18:21:17.844026 4661 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 18:21:17 crc kubenswrapper[4661]: E0120 18:21:17.844041 4661 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 18:21:17 crc kubenswrapper[4661]: E0120 18:21:17.844098 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs podName:d603e76e-8a9d-444f-b251-2d29b5588c8e nodeName:}" failed. No retries permitted until 2026-01-20 18:21:21.844081309 +0000 UTC m=+938.174870971 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs") pod "openstack-operator-controller-manager-58b4997fc9-9wjks" (UID: "d603e76e-8a9d-444f-b251-2d29b5588c8e") : secret "metrics-server-cert" not found Jan 20 18:21:17 crc kubenswrapper[4661]: E0120 18:21:17.844114 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs podName:d603e76e-8a9d-444f-b251-2d29b5588c8e nodeName:}" failed. No retries permitted until 2026-01-20 18:21:21.844108299 +0000 UTC m=+938.174897961 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs") pod "openstack-operator-controller-manager-58b4997fc9-9wjks" (UID: "d603e76e-8a9d-444f-b251-2d29b5588c8e") : secret "webhook-server-cert" not found Jan 20 18:21:17 crc kubenswrapper[4661]: I0120 18:21:17.935413 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-2wsx8" event={"ID":"22fe1eac-c7f9-4cef-8811-db5861b4caa2","Type":"ContainerStarted","Data":"6059b409ad53cca1550068247c5018d5f00503c25f8544e3dbe4c547bc1aabbb"} Jan 20 18:21:17 crc kubenswrapper[4661]: I0120 18:21:17.941643 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" event={"ID":"dbbf0040-fc50-457e-ad76-42d6061a6df1","Type":"ContainerStarted","Data":"9652539f87420ee18e5c2d3e376ea007da3df25fc742ebf63cd2639ae4fb9ba2"} Jan 20 18:21:17 crc kubenswrapper[4661]: E0120 18:21:17.943328 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" podUID="5a07b584-21cc-464b-a3bf-046c6e0ab18f" Jan 20 18:21:17 crc kubenswrapper[4661]: E0120 18:21:17.947058 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" podUID="7f267072-d784-469d-acad-238e58ddd82c" Jan 20 18:21:17 crc kubenswrapper[4661]: E0120 18:21:17.950063 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" podUID="f61aad5b-f531-4dc0-8328-4b057c84651e" Jan 20 18:21:17 crc kubenswrapper[4661]: E0120 18:21:17.953699 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" podUID="dbbf0040-fc50-457e-ad76-42d6061a6df1" Jan 20 18:21:18 crc kubenswrapper[4661]: E0120 18:21:18.957927 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" podUID="dbbf0040-fc50-457e-ad76-42d6061a6df1" Jan 20 18:21:20 crc kubenswrapper[4661]: I0120 18:21:20.901475 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert\") pod \"infra-operator-controller-manager-77c48c7859-w8bbb\" (UID: \"70002b35-6f0d-4679-9279-a80574c467f0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:21:20 crc kubenswrapper[4661]: E0120 18:21:20.901635 4661 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 18:21:20 crc kubenswrapper[4661]: E0120 18:21:20.902000 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert podName:70002b35-6f0d-4679-9279-a80574c467f0 nodeName:}" failed. No retries permitted until 2026-01-20 18:21:28.901979399 +0000 UTC m=+945.232769061 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert") pod "infra-operator-controller-manager-77c48c7859-w8bbb" (UID: "70002b35-6f0d-4679-9279-a80574c467f0") : secret "infra-operator-webhook-server-cert" not found Jan 20 18:21:21 crc kubenswrapper[4661]: I0120 18:21:21.714505 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc\" (UID: \"65995719-9618-424e-a324-084d52a0cd47\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:21:21 crc kubenswrapper[4661]: E0120 18:21:21.714819 4661 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 18:21:21 crc kubenswrapper[4661]: E0120 18:21:21.714877 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert podName:65995719-9618-424e-a324-084d52a0cd47 nodeName:}" failed. No retries permitted until 2026-01-20 18:21:29.714859873 +0000 UTC m=+946.045649535 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" (UID: "65995719-9618-424e-a324-084d52a0cd47") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 18:21:21 crc kubenswrapper[4661]: I0120 18:21:21.920355 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:21 crc kubenswrapper[4661]: I0120 18:21:21.920764 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:21 crc kubenswrapper[4661]: E0120 18:21:21.920625 4661 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 18:21:21 crc kubenswrapper[4661]: E0120 18:21:21.920880 4661 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 18:21:21 crc kubenswrapper[4661]: E0120 18:21:21.920909 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs podName:d603e76e-8a9d-444f-b251-2d29b5588c8e nodeName:}" failed. No retries permitted until 2026-01-20 18:21:29.920883818 +0000 UTC m=+946.251673550 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs") pod "openstack-operator-controller-manager-58b4997fc9-9wjks" (UID: "d603e76e-8a9d-444f-b251-2d29b5588c8e") : secret "metrics-server-cert" not found Jan 20 18:21:21 crc kubenswrapper[4661]: E0120 18:21:21.920933 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs podName:d603e76e-8a9d-444f-b251-2d29b5588c8e nodeName:}" failed. No retries permitted until 2026-01-20 18:21:29.920922019 +0000 UTC m=+946.251711781 (durationBeforeRetry 8s). 
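The MountVolume failures above are retried with a growing durationBeforeRetry (4s for the attempts at 18:21:17, 8s for the attempts at 18:21:21) until the referenced secrets exist; the kubelet's nestedpendingoperations tracker doubles the wait after each consecutive failure, up to a cap. A hedged sketch of that doubling, with the initial and maximum delays chosen for illustration rather than read from the kubelet source:

package main

import (
	"fmt"
	"time"
)

// backoff doubles the delay after every consecutive failure, up to max,
// mirroring the durationBeforeRetry progression seen in this log (4s, 8s, ...).
type backoff struct {
	initial, max, current time.Duration
}

func (b *backoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
	} else {
		b.current *= 2
		if b.current > b.max {
			b.current = b.max
		}
	}
	return b.current
}

func main() {
	// Illustrative values only; the real operation executor starts lower
	// and caps at roughly two minutes.
	b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 0; i < 8; i++ {
		fmt.Printf("retry %d after %v\n", i+1, b.next())
	}
}

Once the missing webhook and metrics certificate secrets are created, the next scheduled retry succeeds, which is what the MountVolume.SetUp "succeeded" entries a few seconds later show.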
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs") pod "openstack-operator-controller-manager-58b4997fc9-9wjks" (UID: "d603e76e-8a9d-444f-b251-2d29b5588c8e") : secret "webhook-server-cert" not found Jan 20 18:21:28 crc kubenswrapper[4661]: I0120 18:21:28.935359 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert\") pod \"infra-operator-controller-manager-77c48c7859-w8bbb\" (UID: \"70002b35-6f0d-4679-9279-a80574c467f0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:21:28 crc kubenswrapper[4661]: I0120 18:21:28.957341 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70002b35-6f0d-4679-9279-a80574c467f0-cert\") pod \"infra-operator-controller-manager-77c48c7859-w8bbb\" (UID: \"70002b35-6f0d-4679-9279-a80574c467f0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:21:29 crc kubenswrapper[4661]: I0120 18:21:29.162354 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:21:29 crc kubenswrapper[4661]: I0120 18:21:29.323284 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:21:29 crc kubenswrapper[4661]: I0120 18:21:29.323354 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:21:29 crc kubenswrapper[4661]: E0120 18:21:29.702891 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028" Jan 20 18:21:29 crc kubenswrapper[4661]: E0120 18:21:29.703409 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-87pjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-c6994669c-gzjg9_openstack-operators(eccd3436-cb57-49b8-a2f7-106fe5e39c7d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:29 crc kubenswrapper[4661]: E0120 18:21:29.705045 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9" podUID="eccd3436-cb57-49b8-a2f7-106fe5e39c7d" Jan 20 18:21:29 crc kubenswrapper[4661]: I0120 18:21:29.746182 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc\" (UID: \"65995719-9618-424e-a324-084d52a0cd47\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:21:29 crc kubenswrapper[4661]: I0120 18:21:29.754566 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65995719-9618-424e-a324-084d52a0cd47-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc\" (UID: \"65995719-9618-424e-a324-084d52a0cd47\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:21:29 crc kubenswrapper[4661]: I0120 18:21:29.816539 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:21:29 crc kubenswrapper[4661]: I0120 18:21:29.951112 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:29 crc kubenswrapper[4661]: I0120 18:21:29.951190 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:29 crc kubenswrapper[4661]: I0120 18:21:29.954786 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-webhook-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:29 crc kubenswrapper[4661]: I0120 18:21:29.954823 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d603e76e-8a9d-444f-b251-2d29b5588c8e-metrics-certs\") pod \"openstack-operator-controller-manager-58b4997fc9-9wjks\" (UID: \"d603e76e-8a9d-444f-b251-2d29b5588c8e\") " pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:30 crc kubenswrapper[4661]: I0120 18:21:30.010754 4661 util.go:30] "No sandbox for pod can be found. 
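For triage, the structured fields in these kubenswrapper entries (pod="namespace/name", and the ErrImagePull / ImagePullBackOff reason inside err=...) are regular enough to tally mechanically. A small, assumption-laden sketch that counts pull failures per pod from a saved copy of this journal; the field layout is inferred from the lines above, not from a published format, and the file name is illustrative:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches lines like:
	//   ... err="failed to \"StartContainer\" ... with ErrImagePull: ..." pod="openstack-operators/placement-..."
	re := regexp.MustCompile(`with (ErrImagePull|ImagePullBackOff).*pod="([^"]+)"`)
	counts := map[string]int{}

	f, err := os.Open("kubelet.log") // saved `journalctl -u kubelet` output; path is illustrative
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries here run to several KB per line
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[2]+" ("+m[1]+")"]++
		}
	}
	for k, v := range counts {
		fmt.Printf("%4d  %s\n", v, k)
	}
}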
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:30 crc kubenswrapper[4661]: E0120 18:21:30.044358 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028\\\"\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9" podUID="eccd3436-cb57-49b8-a2f7-106fe5e39c7d" Jan 20 18:21:30 crc kubenswrapper[4661]: E0120 18:21:30.401556 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729" Jan 20 18:21:30 crc kubenswrapper[4661]: E0120 18:21:30.401760 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-82gft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-mqz45_openstack-operators(5798b368-6725-4e14-a77c-37b7bcfd538d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:30 crc kubenswrapper[4661]: E0120 
18:21:30.402954 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45" podUID="5798b368-6725-4e14-a77c-37b7bcfd538d" Jan 20 18:21:31 crc kubenswrapper[4661]: E0120 18:21:31.050297 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45" podUID="5798b368-6725-4e14-a77c-37b7bcfd538d" Jan 20 18:21:33 crc kubenswrapper[4661]: E0120 18:21:33.179908 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525" Jan 20 18:21:33 crc kubenswrapper[4661]: E0120 18:21:33.180342 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-44psq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ironic-operator-controller-manager-78757b4889-cszrc_openstack-operators(10ed69a9-7fbf-4139-b2b2-80dec4f8cf41): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:33 crc kubenswrapper[4661]: E0120 18:21:33.181590 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc" podUID="10ed69a9-7fbf-4139-b2b2-80dec4f8cf41" Jan 20 18:21:34 crc kubenswrapper[4661]: E0120 18:21:34.072499 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc" podUID="10ed69a9-7fbf-4139-b2b2-80dec4f8cf41" Jan 20 18:21:36 crc kubenswrapper[4661]: E0120 18:21:36.562933 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 20 18:21:36 crc kubenswrapper[4661]: E0120 18:21:36.563450 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lpzwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-dw6hd_openstack-operators(08e08814-f213-4476-a78d-82cddc30022d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:36 crc kubenswrapper[4661]: E0120 18:21:36.564700 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd" podUID="08e08814-f213-4476-a78d-82cddc30022d" Jan 20 18:21:37 crc kubenswrapper[4661]: E0120 18:21:37.110730 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd" podUID="08e08814-f213-4476-a78d-82cddc30022d" Jan 20 18:21:38 crc kubenswrapper[4661]: E0120 18:21:38.507700 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32" Jan 20 18:21:38 crc kubenswrapper[4661]: E0120 18:21:38.508199 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pm8nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-svt25_openstack-operators(6c1159da-faf7-4389-b57b-05173827968d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:38 crc kubenswrapper[4661]: E0120 18:21:38.509383 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25" podUID="6c1159da-faf7-4389-b57b-05173827968d" Jan 20 18:21:39 crc kubenswrapper[4661]: E0120 18:21:39.122516 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25" podUID="6c1159da-faf7-4389-b57b-05173827968d" Jan 20 18:21:40 crc kubenswrapper[4661]: E0120 18:21:40.503505 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488" Jan 20 18:21:40 crc kubenswrapper[4661]: E0120 18:21:40.504599 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ps9cc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-9b68f5989-hk9zx_openstack-operators(51bdae14-22a5-4783-8712-fc51ca6d8a07): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:40 crc kubenswrapper[4661]: E0120 18:21:40.506051 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx" podUID="51bdae14-22a5-4783-8712-fc51ca6d8a07" Jan 20 18:21:41 crc kubenswrapper[4661]: E0120 18:21:41.140036 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx" podUID="51bdae14-22a5-4783-8712-fc51ca6d8a07" Jan 20 18:21:41 crc kubenswrapper[4661]: E0120 18:21:41.185713 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 20 18:21:41 crc kubenswrapper[4661]: E0120 18:21:41.186207 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cqrpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-5w4m2_openstack-operators(04a8f9c5-45fc-47db-adf2-3de38af2cf96): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:41 crc kubenswrapper[4661]: E0120 18:21:41.187389 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2" podUID="04a8f9c5-45fc-47db-adf2-3de38af2cf96" Jan 20 18:21:42 crc kubenswrapper[4661]: E0120 18:21:42.143933 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2" podUID="04a8f9c5-45fc-47db-adf2-3de38af2cf96" Jan 20 18:21:42 crc kubenswrapper[4661]: E0120 18:21:42.371454 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71" Jan 20 18:21:42 crc kubenswrapper[4661]: E0120 18:21:42.371752 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qpqjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-69ktn_openstack-operators(12b130a9-df33-4c1a-a145-961791dc9d9d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:42 crc kubenswrapper[4661]: E0120 18:21:42.373374 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn" podUID="12b130a9-df33-4c1a-a145-961791dc9d9d" Jan 20 18:21:42 crc kubenswrapper[4661]: E0120 18:21:42.952099 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92" Jan 20 18:21:42 crc kubenswrapper[4661]: E0120 18:21:42.952264 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-47gnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-wqr58_openstack-operators(497cc518-3499-43be-8aff-c4ff58803cba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:42 crc kubenswrapper[4661]: E0120 18:21:42.953452 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58" podUID="497cc518-3499-43be-8aff-c4ff58803cba" Jan 20 18:21:43 crc kubenswrapper[4661]: E0120 18:21:43.154563 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn" podUID="12b130a9-df33-4c1a-a145-961791dc9d9d" Jan 20 18:21:43 crc kubenswrapper[4661]: E0120 18:21:43.156741 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58" 
podUID="497cc518-3499-43be-8aff-c4ff58803cba" Jan 20 18:21:43 crc kubenswrapper[4661]: E0120 18:21:43.591514 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c" Jan 20 18:21:43 crc kubenswrapper[4661]: E0120 18:21:43.592019 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wsm2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-cqf8m_openstack-operators(1b070a22-e050-4db7-bc74-f8a1129a8d61): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:43 crc kubenswrapper[4661]: E0120 18:21:43.593211 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m" podUID="1b070a22-e050-4db7-bc74-f8a1129a8d61" Jan 20 18:21:44 crc kubenswrapper[4661]: E0120 18:21:44.161446 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m" podUID="1b070a22-e050-4db7-bc74-f8a1129a8d61" Jan 20 18:21:48 crc kubenswrapper[4661]: E0120 18:21:48.198942 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a" Jan 20 18:21:48 crc kubenswrapper[4661]: E0120 18:21:48.200845 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-95ww6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7ddb5c749-bbwzg_openstack-operators(e257e7b3-ba70-44d2-abb9-6a6848bf1c06): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:48 crc kubenswrapper[4661]: E0120 18:21:48.202389 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg" 
podUID="e257e7b3-ba70-44d2-abb9-6a6848bf1c06" Jan 20 18:21:49 crc kubenswrapper[4661]: E0120 18:21:49.208234 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg" podUID="e257e7b3-ba70-44d2-abb9-6a6848bf1c06" Jan 20 18:21:50 crc kubenswrapper[4661]: E0120 18:21:50.228967 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 20 18:21:50 crc kubenswrapper[4661]: E0120 18:21:50.229410 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jjqhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-4g9db_openstack-operators(a5920876-3cd0-41cf-b7d8-6fd8ea0af29c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:50 crc kubenswrapper[4661]: E0120 18:21:50.230540 4661 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db" podUID="a5920876-3cd0-41cf-b7d8-6fd8ea0af29c" Jan 20 18:21:51 crc kubenswrapper[4661]: E0120 18:21:51.151531 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 20 18:21:51 crc kubenswrapper[4661]: E0120 18:21:51.151808 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5vjzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-2g55t_openstack-operators(52bfaf4d-624e-45d7-86d8-4c0e18afe2e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:51 crc kubenswrapper[4661]: E0120 18:21:51.153095 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-2g55t" 
podUID="52bfaf4d-624e-45d7-86d8-4c0e18afe2e6" Jan 20 18:21:51 crc kubenswrapper[4661]: E0120 18:21:51.226073 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-2g55t" podUID="52bfaf4d-624e-45d7-86d8-4c0e18afe2e6" Jan 20 18:21:51 crc kubenswrapper[4661]: E0120 18:21:51.226232 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db" podUID="a5920876-3cd0-41cf-b7d8-6fd8ea0af29c" Jan 20 18:21:51 crc kubenswrapper[4661]: E0120 18:21:51.981554 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 20 18:21:51 crc kubenswrapper[4661]: E0120 18:21:51.982000 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fb2cz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-4msz7_openstack-operators(f61aad5b-f531-4dc0-8328-4b057c84651e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:51 crc kubenswrapper[4661]: E0120 18:21:51.983374 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" podUID="f61aad5b-f531-4dc0-8328-4b057c84651e" Jan 20 18:21:52 crc kubenswrapper[4661]: E0120 18:21:52.569664 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e" Jan 20 18:21:52 crc kubenswrapper[4661]: E0120 18:21:52.569907 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tq2bb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-gg985_openstack-operators(7f267072-d784-469d-acad-238e58ddd82c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:52 crc kubenswrapper[4661]: E0120 18:21:52.571184 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" podUID="7f267072-d784-469d-acad-238e58ddd82c" Jan 20 18:21:53 crc kubenswrapper[4661]: I0120 18:21:53.146096 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 18:21:53 crc kubenswrapper[4661]: E0120 18:21:53.163702 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737" Jan 20 18:21:53 crc kubenswrapper[4661]: E0120 18:21:53.163911 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2zz49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-tcgdv_openstack-operators(dbbf0040-fc50-457e-ad76-42d6061a6df1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:53 crc kubenswrapper[4661]: E0120 18:21:53.166084 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" podUID="dbbf0040-fc50-457e-ad76-42d6061a6df1" Jan 20 18:21:56 crc kubenswrapper[4661]: E0120 18:21:56.733799 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad" Jan 20 18:21:56 crc kubenswrapper[4661]: E0120 18:21:56.734637 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6k9xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-hppzk_openstack-operators(5a07b584-21cc-464b-a3bf-046c6e0ab18f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:56 crc kubenswrapper[4661]: E0120 18:21:56.735931 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" podUID="5a07b584-21cc-464b-a3bf-046c6e0ab18f" Jan 20 18:21:58 crc kubenswrapper[4661]: E0120 18:21:58.330182 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 20 18:21:58 crc kubenswrapper[4661]: E0120 18:21:58.330605 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zbzjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-v8gf9_openstack-operators(2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:21:58 crc kubenswrapper[4661]: E0120 18:21:58.334156 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8gf9" podUID="2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec" Jan 20 18:21:58 crc kubenswrapper[4661]: I0120 18:21:58.731017 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc"] Jan 20 18:21:58 crc kubenswrapper[4661]: W0120 18:21:58.753496 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65995719_9618_424e_a324_084d52a0cd47.slice/crio-69c926262bb64133d475a216004c98cb9f394d44c302d47c12f864913439f52a WatchSource:0}: Error finding container 69c926262bb64133d475a216004c98cb9f394d44c302d47c12f864913439f52a: Status 404 returned error can't find the container with id 69c926262bb64133d475a216004c98cb9f394d44c302d47c12f864913439f52a Jan 20 18:21:58 crc kubenswrapper[4661]: I0120 18:21:58.814184 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb"] Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.079948 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks"] Jan 20 18:21:59 crc kubenswrapper[4661]: W0120 18:21:59.094682 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd603e76e_8a9d_444f_b251_2d29b5588c8e.slice/crio-edffdf7cceedb0d82b5e220217f9a248baedae48f03d01fec9b212be5fcf964d WatchSource:0}: Error finding container edffdf7cceedb0d82b5e220217f9a248baedae48f03d01fec9b212be5fcf964d: Status 404 returned error can't find the container with id edffdf7cceedb0d82b5e220217f9a248baedae48f03d01fec9b212be5fcf964d Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.287335 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc" event={"ID":"10ed69a9-7fbf-4139-b2b2-80dec4f8cf41","Type":"ContainerStarted","Data":"4bbbdb8681f69379adddc88d20386ec3d07e4a7277118977a98064d203672bfa"} Jan 20 18:21:59 crc 
kubenswrapper[4661]: I0120 18:21:59.287973 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.288452 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2" event={"ID":"04a8f9c5-45fc-47db-adf2-3de38af2cf96","Type":"ContainerStarted","Data":"cafb575da7a47663d0c6e7bbbf793e3c8bebace5b08f55fa80d986863922274e"} Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.288587 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.290161 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" event={"ID":"65995719-9618-424e-a324-084d52a0cd47","Type":"ContainerStarted","Data":"69c926262bb64133d475a216004c98cb9f394d44c302d47c12f864913439f52a"} Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.291372 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-r5bws" event={"ID":"2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec","Type":"ContainerStarted","Data":"1b18d98bdc86e027712dc5364997af5ac5df0f8faa78a88b99b3f78af9e5b553"} Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.291945 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-r5bws" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.298196 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx" event={"ID":"51bdae14-22a5-4783-8712-fc51ca6d8a07","Type":"ContainerStarted","Data":"5319d18db56b053f83d991ea6bec589bcdb26db9fd87ac75e5f4e3e8d3a793e3"} Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.298808 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.302335 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9" event={"ID":"eccd3436-cb57-49b8-a2f7-106fe5e39c7d","Type":"ContainerStarted","Data":"a6874fa05e437b2711ed440e46b4763c4c5ad61a88e7e5faee4185fd41c42606"} Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.302519 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.303547 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45" event={"ID":"5798b368-6725-4e14-a77c-37b7bcfd538d","Type":"ContainerStarted","Data":"140e81b7f7466d482debaf6a6f90d18d0328a6dfa9b7fb044fe1f5d775bc2dda"} Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.303882 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.304648 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" 
event={"ID":"d603e76e-8a9d-444f-b251-2d29b5588c8e","Type":"ContainerStarted","Data":"3bdc3da16c9074ff5d8ed162efeced900a49faadbda6db7b628bf39fc6ceb0b7"} Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.304684 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" event={"ID":"d603e76e-8a9d-444f-b251-2d29b5588c8e","Type":"ContainerStarted","Data":"edffdf7cceedb0d82b5e220217f9a248baedae48f03d01fec9b212be5fcf964d"} Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.304808 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.311459 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" event={"ID":"70002b35-6f0d-4679-9279-a80574c467f0","Type":"ContainerStarted","Data":"71f8520d0a4e723925745e1e5039f65ddf79f6e88c0050ea3099ad73532a60a8"} Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.315460 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-2wsx8" event={"ID":"22fe1eac-c7f9-4cef-8811-db5861b4caa2","Type":"ContainerStarted","Data":"9d5ed420b885c42c9cf72d2549abc51a91a83c275b78bf1f20982970a0202929"} Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.315606 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-2wsx8" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.317275 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25" event={"ID":"6c1159da-faf7-4389-b57b-05173827968d","Type":"ContainerStarted","Data":"04d90338337a300da740382ee6500e3f0768a2000f4a45f644db79d5ab12295a"} Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.317436 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.319812 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn" event={"ID":"12b130a9-df33-4c1a-a145-961791dc9d9d","Type":"ContainerStarted","Data":"e4b0024967b5cd7e42630df0c15bc3cd67a8784740056c834e6d1924b4d32b0b"} Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.319960 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.320577 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc" podStartSLOduration=4.885472396 podStartE2EDuration="47.320567434s" podCreationTimestamp="2026-01-20 18:21:12 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.11695311 +0000 UTC m=+932.447742772" lastFinishedPulling="2026-01-20 18:21:58.552048148 +0000 UTC m=+974.882837810" observedRunningTime="2026-01-20 18:21:59.313916023 +0000 UTC m=+975.644705685" watchObservedRunningTime="2026-01-20 18:21:59.320567434 +0000 UTC m=+975.651357096" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.324824 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.324885 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.326535 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd" event={"ID":"08e08814-f213-4476-a78d-82cddc30022d","Type":"ContainerStarted","Data":"11831518adf6045dede99a4bcd313a7f9d5a0cc12b30f36d65734d938f1d4753"} Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.326871 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd" Jan 20 18:21:59 crc kubenswrapper[4661]: E0120 18:21:59.330722 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8gf9" podUID="2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.477824 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-r5bws" podStartSLOduration=10.835423451 podStartE2EDuration="47.477660104s" podCreationTimestamp="2026-01-20 18:21:12 +0000 UTC" firstStartedPulling="2026-01-20 18:21:15.91149779 +0000 UTC m=+932.242287452" lastFinishedPulling="2026-01-20 18:21:52.553734433 +0000 UTC m=+968.884524105" observedRunningTime="2026-01-20 18:21:59.36539598 +0000 UTC m=+975.696185642" watchObservedRunningTime="2026-01-20 18:21:59.477660104 +0000 UTC m=+975.808449766" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.482126 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" podStartSLOduration=46.482112525 podStartE2EDuration="46.482112525s" podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:21:59.474962371 +0000 UTC m=+975.805752033" watchObservedRunningTime="2026-01-20 18:21:59.482112525 +0000 UTC m=+975.812902187" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.567095 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx" podStartSLOduration=5.066738524 podStartE2EDuration="47.56707894s" podCreationTimestamp="2026-01-20 18:21:12 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.154314164 +0000 UTC m=+932.485103826" lastFinishedPulling="2026-01-20 18:21:58.65465458 +0000 UTC m=+974.985444242" observedRunningTime="2026-01-20 18:21:59.566819503 +0000 UTC m=+975.897609155" watchObservedRunningTime="2026-01-20 18:21:59.56707894 +0000 UTC 
m=+975.897868602" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.638919 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2" podStartSLOduration=5.115427152 podStartE2EDuration="47.638897258s" podCreationTimestamp="2026-01-20 18:21:12 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.137433649 +0000 UTC m=+932.468223311" lastFinishedPulling="2026-01-20 18:21:58.660903755 +0000 UTC m=+974.991693417" observedRunningTime="2026-01-20 18:21:59.635262979 +0000 UTC m=+975.966052641" watchObservedRunningTime="2026-01-20 18:21:59.638897258 +0000 UTC m=+975.969686910" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.736641 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9" podStartSLOduration=4.856226274 podStartE2EDuration="47.736626358s" podCreationTimestamp="2026-01-20 18:21:12 +0000 UTC" firstStartedPulling="2026-01-20 18:21:15.671519431 +0000 UTC m=+932.002309103" lastFinishedPulling="2026-01-20 18:21:58.551919525 +0000 UTC m=+974.882709187" observedRunningTime="2026-01-20 18:21:59.731123949 +0000 UTC m=+976.061913621" watchObservedRunningTime="2026-01-20 18:21:59.736626358 +0000 UTC m=+976.067416020" Jan 20 18:21:59 crc kubenswrapper[4661]: I0120 18:21:59.815266 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45" podStartSLOduration=4.628267396 podStartE2EDuration="46.815250821s" podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.411007593 +0000 UTC m=+932.741797255" lastFinishedPulling="2026-01-20 18:21:58.597991018 +0000 UTC m=+974.928780680" observedRunningTime="2026-01-20 18:21:59.814205612 +0000 UTC m=+976.144995274" watchObservedRunningTime="2026-01-20 18:21:59.815250821 +0000 UTC m=+976.146040483" Jan 20 18:22:00 crc kubenswrapper[4661]: I0120 18:22:00.070064 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd" podStartSLOduration=5.248948028 podStartE2EDuration="48.070042341s" podCreationTimestamp="2026-01-20 18:21:12 +0000 UTC" firstStartedPulling="2026-01-20 18:21:15.730849943 +0000 UTC m=+932.061639605" lastFinishedPulling="2026-01-20 18:21:58.551944216 +0000 UTC m=+974.882733918" observedRunningTime="2026-01-20 18:22:00.02428928 +0000 UTC m=+976.355078942" watchObservedRunningTime="2026-01-20 18:22:00.070042341 +0000 UTC m=+976.400832003" Jan 20 18:22:00 crc kubenswrapper[4661]: I0120 18:22:00.071864 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25" podStartSLOduration=4.926106671 podStartE2EDuration="47.071857991s" podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.452227808 +0000 UTC m=+932.783017470" lastFinishedPulling="2026-01-20 18:21:58.597979128 +0000 UTC m=+974.928768790" observedRunningTime="2026-01-20 18:22:00.065589801 +0000 UTC m=+976.396379463" watchObservedRunningTime="2026-01-20 18:22:00.071857991 +0000 UTC m=+976.402647653" Jan 20 18:22:00 crc kubenswrapper[4661]: I0120 18:22:00.178013 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-2wsx8" podStartSLOduration=12.093378681 
podStartE2EDuration="47.177997249s" podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.888422394 +0000 UTC m=+933.219212056" lastFinishedPulling="2026-01-20 18:21:51.973040962 +0000 UTC m=+968.303830624" observedRunningTime="2026-01-20 18:22:00.177734852 +0000 UTC m=+976.508524514" watchObservedRunningTime="2026-01-20 18:22:00.177997249 +0000 UTC m=+976.508786911" Jan 20 18:22:00 crc kubenswrapper[4661]: I0120 18:22:00.260244 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn" podStartSLOduration=5.043934543 podStartE2EDuration="47.26022925s" podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.437198003 +0000 UTC m=+932.767987665" lastFinishedPulling="2026-01-20 18:21:58.65349271 +0000 UTC m=+974.984282372" observedRunningTime="2026-01-20 18:22:00.257866516 +0000 UTC m=+976.588656188" watchObservedRunningTime="2026-01-20 18:22:00.26022925 +0000 UTC m=+976.591018902" Jan 20 18:22:00 crc kubenswrapper[4661]: I0120 18:22:00.341264 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58" event={"ID":"497cc518-3499-43be-8aff-c4ff58803cba","Type":"ContainerStarted","Data":"89884a6aa98b37a6dbc50c7572e238dbec9f55c74088cf2d4208e589db2219ce"} Jan 20 18:22:00 crc kubenswrapper[4661]: I0120 18:22:00.341480 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58" Jan 20 18:22:00 crc kubenswrapper[4661]: I0120 18:22:00.342904 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m" event={"ID":"1b070a22-e050-4db7-bc74-f8a1129a8d61","Type":"ContainerStarted","Data":"fa62094e5875ea99bcf68e8af8e08679d1ade06763fd3511f85c2f0cc8d321dc"} Jan 20 18:22:00 crc kubenswrapper[4661]: I0120 18:22:00.343204 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m" Jan 20 18:22:00 crc kubenswrapper[4661]: I0120 18:22:00.373207 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58" podStartSLOduration=5.235904438 podStartE2EDuration="47.373191184s" podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.611975535 +0000 UTC m=+932.942765197" lastFinishedPulling="2026-01-20 18:21:58.749262281 +0000 UTC m=+975.080051943" observedRunningTime="2026-01-20 18:22:00.368719782 +0000 UTC m=+976.699509454" watchObservedRunningTime="2026-01-20 18:22:00.373191184 +0000 UTC m=+976.703980846" Jan 20 18:22:00 crc kubenswrapper[4661]: I0120 18:22:00.405687 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m" podStartSLOduration=5.214593051 podStartE2EDuration="47.405651074s" podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.469883503 +0000 UTC m=+932.800673165" lastFinishedPulling="2026-01-20 18:21:58.660941526 +0000 UTC m=+974.991731188" observedRunningTime="2026-01-20 18:22:00.402772456 +0000 UTC m=+976.733562118" watchObservedRunningTime="2026-01-20 18:22:00.405651074 +0000 UTC m=+976.736440736" Jan 20 18:22:03 crc kubenswrapper[4661]: E0120 18:22:03.143869 
4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" podUID="f61aad5b-f531-4dc0-8328-4b057c84651e" Jan 20 18:22:03 crc kubenswrapper[4661]: I0120 18:22:03.505688 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-r5bws" Jan 20 18:22:04 crc kubenswrapper[4661]: I0120 18:22:04.153498 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqr58" Jan 20 18:22:04 crc kubenswrapper[4661]: I0120 18:22:04.385612 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" event={"ID":"65995719-9618-424e-a324-084d52a0cd47","Type":"ContainerStarted","Data":"6929b3b958deffbb3717c4c0c4c17b03fc70f2388ec06481ea91ec786ff80cd5"} Jan 20 18:22:04 crc kubenswrapper[4661]: I0120 18:22:04.386272 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:22:04 crc kubenswrapper[4661]: I0120 18:22:04.387064 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg" event={"ID":"e257e7b3-ba70-44d2-abb9-6a6848bf1c06","Type":"ContainerStarted","Data":"3ccb35fdc717e48422f5ae7ccd758438c656272a718aeddf5fc3dbf8ecafc43d"} Jan 20 18:22:04 crc kubenswrapper[4661]: I0120 18:22:04.387446 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg" Jan 20 18:22:04 crc kubenswrapper[4661]: I0120 18:22:04.388701 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" event={"ID":"70002b35-6f0d-4679-9279-a80574c467f0","Type":"ContainerStarted","Data":"edcdb85b7ed90982537a0fda3701481888729f8bfc97758a5e0c45f3eb13d664"} Jan 20 18:22:04 crc kubenswrapper[4661]: I0120 18:22:04.389227 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:22:04 crc kubenswrapper[4661]: I0120 18:22:04.412220 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" podStartSLOduration=46.215911305 podStartE2EDuration="51.412201851s" podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="2026-01-20 18:21:58.766454214 +0000 UTC m=+975.097243876" lastFinishedPulling="2026-01-20 18:22:03.96274476 +0000 UTC m=+980.293534422" observedRunningTime="2026-01-20 18:22:04.405797897 +0000 UTC m=+980.736587559" watchObservedRunningTime="2026-01-20 18:22:04.412201851 +0000 UTC m=+980.742991513" Jan 20 18:22:04 crc kubenswrapper[4661]: I0120 18:22:04.424539 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" podStartSLOduration=47.322042949 podStartE2EDuration="52.424522225s" podCreationTimestamp="2026-01-20 18:21:12 +0000 UTC" 
firstStartedPulling="2026-01-20 18:21:58.861112537 +0000 UTC m=+975.191902199" lastFinishedPulling="2026-01-20 18:22:03.963591813 +0000 UTC m=+980.294381475" observedRunningTime="2026-01-20 18:22:04.42251241 +0000 UTC m=+980.753302092" watchObservedRunningTime="2026-01-20 18:22:04.424522225 +0000 UTC m=+980.755311887" Jan 20 18:22:04 crc kubenswrapper[4661]: I0120 18:22:04.446941 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg" podStartSLOduration=4.155413295 podStartE2EDuration="52.446923143s" podCreationTimestamp="2026-01-20 18:21:12 +0000 UTC" firstStartedPulling="2026-01-20 18:21:15.672099366 +0000 UTC m=+932.002889038" lastFinishedPulling="2026-01-20 18:22:03.963609214 +0000 UTC m=+980.294398886" observedRunningTime="2026-01-20 18:22:04.440892689 +0000 UTC m=+980.771682371" watchObservedRunningTime="2026-01-20 18:22:04.446923143 +0000 UTC m=+980.777712815" Jan 20 18:22:04 crc kubenswrapper[4661]: I0120 18:22:04.472934 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-2wsx8" Jan 20 18:22:05 crc kubenswrapper[4661]: E0120 18:22:05.143441 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" podUID="dbbf0040-fc50-457e-ad76-42d6061a6df1" Jan 20 18:22:07 crc kubenswrapper[4661]: E0120 18:22:07.143578 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" podUID="7f267072-d784-469d-acad-238e58ddd82c" Jan 20 18:22:07 crc kubenswrapper[4661]: I0120 18:22:07.406259 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-2g55t" event={"ID":"52bfaf4d-624e-45d7-86d8-4c0e18afe2e6","Type":"ContainerStarted","Data":"ccbee35f1e1b049db0e5747d9aae45ff3630b025945d7ec36b3b72760dc9f84d"} Jan 20 18:22:07 crc kubenswrapper[4661]: I0120 18:22:07.406718 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-2g55t" Jan 20 18:22:07 crc kubenswrapper[4661]: I0120 18:22:07.407797 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db" event={"ID":"a5920876-3cd0-41cf-b7d8-6fd8ea0af29c","Type":"ContainerStarted","Data":"b4710b3d43a327591d1c8492b4fb7d6e2d4116ee745b8c84ccc9692a853406b2"} Jan 20 18:22:07 crc kubenswrapper[4661]: I0120 18:22:07.407988 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db" Jan 20 18:22:07 crc kubenswrapper[4661]: I0120 18:22:07.426575 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-2g55t" podStartSLOduration=4.194025389 podStartE2EDuration="54.426550386s" 
podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.450176404 +0000 UTC m=+932.780966066" lastFinishedPulling="2026-01-20 18:22:06.682701391 +0000 UTC m=+983.013491063" observedRunningTime="2026-01-20 18:22:07.418068456 +0000 UTC m=+983.748858138" watchObservedRunningTime="2026-01-20 18:22:07.426550386 +0000 UTC m=+983.757340088" Jan 20 18:22:07 crc kubenswrapper[4661]: I0120 18:22:07.439290 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db" podStartSLOduration=5.275629996 podStartE2EDuration="55.439276572s" podCreationTimestamp="2026-01-20 18:21:12 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.459391097 +0000 UTC m=+932.790180759" lastFinishedPulling="2026-01-20 18:22:06.623037673 +0000 UTC m=+982.953827335" observedRunningTime="2026-01-20 18:22:07.43296118 +0000 UTC m=+983.763750842" watchObservedRunningTime="2026-01-20 18:22:07.439276572 +0000 UTC m=+983.770066234" Jan 20 18:22:09 crc kubenswrapper[4661]: E0120 18:22:09.143202 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" podUID="5a07b584-21cc-464b-a3bf-046c6e0ab18f" Jan 20 18:22:09 crc kubenswrapper[4661]: I0120 18:22:09.168685 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-w8bbb" Jan 20 18:22:09 crc kubenswrapper[4661]: I0120 18:22:09.823099 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc" Jan 20 18:22:10 crc kubenswrapper[4661]: I0120 18:22:10.017705 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-58b4997fc9-9wjks" Jan 20 18:22:12 crc kubenswrapper[4661]: I0120 18:22:12.438407 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8gf9" event={"ID":"2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec","Type":"ContainerStarted","Data":"b6c3b3b0572e325adadccbc71ecfa023cdfd9ee120e6a225825676e6ddd7f742"} Jan 20 18:22:12 crc kubenswrapper[4661]: I0120 18:22:12.464990 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8gf9" podStartSLOduration=4.555657986 podStartE2EDuration="59.464973921s" podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.7690243 +0000 UTC m=+933.099813952" lastFinishedPulling="2026-01-20 18:22:11.678340225 +0000 UTC m=+988.009129887" observedRunningTime="2026-01-20 18:22:12.453779017 +0000 UTC m=+988.784568689" watchObservedRunningTime="2026-01-20 18:22:12.464973921 +0000 UTC m=+988.795763583" Jan 20 18:22:13 crc kubenswrapper[4661]: I0120 18:22:13.063785 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bbwzg" Jan 20 18:22:13 crc kubenswrapper[4661]: I0120 18:22:13.107574 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-hk9zx" Jan 20 18:22:13 crc kubenswrapper[4661]: I0120 18:22:13.109240 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dw6hd" Jan 20 18:22:13 crc kubenswrapper[4661]: I0120 18:22:13.166557 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gzjg9" Jan 20 18:22:13 crc kubenswrapper[4661]: I0120 18:22:13.696627 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2" Jan 20 18:22:13 crc kubenswrapper[4661]: I0120 18:22:13.863991 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-mqz45" Jan 20 18:22:13 crc kubenswrapper[4661]: I0120 18:22:13.866882 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-4g9db" Jan 20 18:22:13 crc kubenswrapper[4661]: I0120 18:22:13.896807 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-cqf8m" Jan 20 18:22:13 crc kubenswrapper[4661]: I0120 18:22:13.939554 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-svt25" Jan 20 18:22:14 crc kubenswrapper[4661]: I0120 18:22:14.001330 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-69ktn" Jan 20 18:22:14 crc kubenswrapper[4661]: I0120 18:22:14.096005 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-2g55t" Jan 20 18:22:14 crc kubenswrapper[4661]: I0120 18:22:14.150291 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-cszrc" Jan 20 18:22:18 crc kubenswrapper[4661]: I0120 18:22:18.732779 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" event={"ID":"dbbf0040-fc50-457e-ad76-42d6061a6df1","Type":"ContainerStarted","Data":"444d34d6940e8f8719527ea3b47bf6215abe1646fbb73b053151a5ac7c420b39"} Jan 20 18:22:18 crc kubenswrapper[4661]: I0120 18:22:18.733433 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" Jan 20 18:22:18 crc kubenswrapper[4661]: I0120 18:22:18.734110 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" event={"ID":"f61aad5b-f531-4dc0-8328-4b057c84651e","Type":"ContainerStarted","Data":"63bce31e7982572d719fd0cdd96e44e1392fbe095e0d903f678089c213437033"} Jan 20 18:22:18 crc kubenswrapper[4661]: I0120 18:22:18.734787 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" Jan 20 18:22:18 crc kubenswrapper[4661]: I0120 18:22:18.755834 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" podStartSLOduration=4.715638042 podStartE2EDuration="1m5.755811203s" podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.90039585 +0000 UTC m=+933.231185512" lastFinishedPulling="2026-01-20 18:22:17.940569001 +0000 UTC m=+994.271358673" observedRunningTime="2026-01-20 18:22:18.747881738 +0000 UTC m=+995.078671410" watchObservedRunningTime="2026-01-20 18:22:18.755811203 +0000 UTC m=+995.086600865" Jan 20 18:22:22 crc kubenswrapper[4661]: I0120 18:22:22.164357 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" podStartSLOduration=7.769268224 podStartE2EDuration="1m9.1643384s" podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.646593197 +0000 UTC m=+932.977382859" lastFinishedPulling="2026-01-20 18:22:18.041663373 +0000 UTC m=+994.372453035" observedRunningTime="2026-01-20 18:22:18.763324767 +0000 UTC m=+995.094114439" watchObservedRunningTime="2026-01-20 18:22:22.1643384 +0000 UTC m=+998.495128062" Jan 20 18:22:23 crc kubenswrapper[4661]: I0120 18:22:23.775442 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" event={"ID":"7f267072-d784-469d-acad-238e58ddd82c","Type":"ContainerStarted","Data":"af95336e90a04f0a7b722dac4304413662724696a68005483944d150b16ff71f"} Jan 20 18:22:23 crc kubenswrapper[4661]: I0120 18:22:23.775945 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" Jan 20 18:22:23 crc kubenswrapper[4661]: I0120 18:22:23.776996 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" event={"ID":"5a07b584-21cc-464b-a3bf-046c6e0ab18f","Type":"ContainerStarted","Data":"111a9c11c44ceb232377f3172165921f3444ee58e886bc7df0d2a6a6295dd4a2"} Jan 20 18:22:23 crc kubenswrapper[4661]: I0120 18:22:23.777171 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" Jan 20 18:22:23 crc kubenswrapper[4661]: I0120 18:22:23.799464 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" podStartSLOduration=4.889258083 podStartE2EDuration="1m10.799448497s" podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.668513194 +0000 UTC m=+932.999302856" lastFinishedPulling="2026-01-20 18:22:22.578703588 +0000 UTC m=+998.909493270" observedRunningTime="2026-01-20 18:22:23.797749281 +0000 UTC m=+1000.128538943" watchObservedRunningTime="2026-01-20 18:22:23.799448497 +0000 UTC m=+1000.130238159" Jan 20 18:22:23 crc kubenswrapper[4661]: I0120 18:22:23.814949 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" podStartSLOduration=4.809634694 podStartE2EDuration="1m10.814934907s" podCreationTimestamp="2026-01-20 18:21:13 +0000 UTC" firstStartedPulling="2026-01-20 18:21:16.629092616 +0000 UTC m=+932.959882278" lastFinishedPulling="2026-01-20 18:22:22.634392819 +0000 UTC m=+998.965182491" observedRunningTime="2026-01-20 18:22:23.811694669 +0000 UTC m=+1000.142484361" 
watchObservedRunningTime="2026-01-20 18:22:23.814934907 +0000 UTC m=+1000.145724569" Jan 20 18:22:24 crc kubenswrapper[4661]: I0120 18:22:24.206386 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-4msz7" Jan 20 18:22:24 crc kubenswrapper[4661]: I0120 18:22:24.251468 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-tcgdv" Jan 20 18:22:29 crc kubenswrapper[4661]: I0120 18:22:29.323464 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:22:29 crc kubenswrapper[4661]: I0120 18:22:29.324309 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:22:29 crc kubenswrapper[4661]: I0120 18:22:29.324394 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:22:29 crc kubenswrapper[4661]: I0120 18:22:29.325339 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"728daaf1b473865f17a594f3c69374509eda708725908283281a9d0d4f532f9a"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 18:22:29 crc kubenswrapper[4661]: I0120 18:22:29.325503 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://728daaf1b473865f17a594f3c69374509eda708725908283281a9d0d4f532f9a" gracePeriod=600 Jan 20 18:22:29 crc kubenswrapper[4661]: I0120 18:22:29.821995 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="728daaf1b473865f17a594f3c69374509eda708725908283281a9d0d4f532f9a" exitCode=0 Jan 20 18:22:29 crc kubenswrapper[4661]: I0120 18:22:29.822191 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"728daaf1b473865f17a594f3c69374509eda708725908283281a9d0d4f532f9a"} Jan 20 18:22:29 crc kubenswrapper[4661]: I0120 18:22:29.822321 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"6a7b06eb16aab1344c1779c2757f290ec217a65e34e3c4694e2964d4e3f3d079"} Jan 20 18:22:29 crc kubenswrapper[4661]: I0120 18:22:29.822343 4661 scope.go:117] "RemoveContainer" containerID="07ea6c09f7f6b3cd3c82aa283c5480b53e463086680df4020a3d82e4e318e5b2" Jan 20 18:22:34 crc kubenswrapper[4661]: I0120 18:22:34.189477 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gg985" Jan 20 18:22:34 crc kubenswrapper[4661]: I0120 18:22:34.356766 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-hppzk" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.712187 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpqnj"] Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.713659 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xpqnj" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.718011 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.720396 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.723652 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpqnj"] Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.725903 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.726517 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7zv8p" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.788864 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zxnfn"] Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.790131 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.792954 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.800456 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zxnfn"] Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.807552 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmzks\" (UniqueName: \"kubernetes.io/projected/0c4ac283-f86b-4a8e-958b-fe189004dc21-kube-api-access-pmzks\") pod \"dnsmasq-dns-675f4bcbfc-xpqnj\" (UID: \"0c4ac283-f86b-4a8e-958b-fe189004dc21\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpqnj" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.807654 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c4ac283-f86b-4a8e-958b-fe189004dc21-config\") pod \"dnsmasq-dns-675f4bcbfc-xpqnj\" (UID: \"0c4ac283-f86b-4a8e-958b-fe189004dc21\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpqnj" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.909062 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmzks\" (UniqueName: \"kubernetes.io/projected/0c4ac283-f86b-4a8e-958b-fe189004dc21-kube-api-access-pmzks\") pod \"dnsmasq-dns-675f4bcbfc-xpqnj\" (UID: \"0c4ac283-f86b-4a8e-958b-fe189004dc21\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpqnj" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.909129 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b866af6c-a952-4ef8-aee0-e18ee5799f98-config\") pod \"dnsmasq-dns-78dd6ddcc-zxnfn\" (UID: \"b866af6c-a952-4ef8-aee0-e18ee5799f98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.909162 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8hm6\" (UniqueName: \"kubernetes.io/projected/b866af6c-a952-4ef8-aee0-e18ee5799f98-kube-api-access-v8hm6\") pod \"dnsmasq-dns-78dd6ddcc-zxnfn\" (UID: \"b866af6c-a952-4ef8-aee0-e18ee5799f98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.909198 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b866af6c-a952-4ef8-aee0-e18ee5799f98-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zxnfn\" (UID: \"b866af6c-a952-4ef8-aee0-e18ee5799f98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.909230 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c4ac283-f86b-4a8e-958b-fe189004dc21-config\") pod \"dnsmasq-dns-675f4bcbfc-xpqnj\" (UID: \"0c4ac283-f86b-4a8e-958b-fe189004dc21\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpqnj" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.910168 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c4ac283-f86b-4a8e-958b-fe189004dc21-config\") pod \"dnsmasq-dns-675f4bcbfc-xpqnj\" (UID: \"0c4ac283-f86b-4a8e-958b-fe189004dc21\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpqnj" Jan 20 18:22:50 crc kubenswrapper[4661]: I0120 18:22:50.928232 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmzks\" (UniqueName: \"kubernetes.io/projected/0c4ac283-f86b-4a8e-958b-fe189004dc21-kube-api-access-pmzks\") pod \"dnsmasq-dns-675f4bcbfc-xpqnj\" (UID: \"0c4ac283-f86b-4a8e-958b-fe189004dc21\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpqnj" Jan 20 18:22:51 crc kubenswrapper[4661]: I0120 18:22:51.009950 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b866af6c-a952-4ef8-aee0-e18ee5799f98-config\") pod \"dnsmasq-dns-78dd6ddcc-zxnfn\" (UID: \"b866af6c-a952-4ef8-aee0-e18ee5799f98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" Jan 20 18:22:51 crc kubenswrapper[4661]: I0120 18:22:51.010002 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8hm6\" (UniqueName: \"kubernetes.io/projected/b866af6c-a952-4ef8-aee0-e18ee5799f98-kube-api-access-v8hm6\") pod \"dnsmasq-dns-78dd6ddcc-zxnfn\" (UID: \"b866af6c-a952-4ef8-aee0-e18ee5799f98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" Jan 20 18:22:51 crc kubenswrapper[4661]: I0120 18:22:51.010045 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b866af6c-a952-4ef8-aee0-e18ee5799f98-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zxnfn\" (UID: \"b866af6c-a952-4ef8-aee0-e18ee5799f98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" Jan 20 18:22:51 crc kubenswrapper[4661]: I0120 18:22:51.011130 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b866af6c-a952-4ef8-aee0-e18ee5799f98-dns-svc\") pod 
\"dnsmasq-dns-78dd6ddcc-zxnfn\" (UID: \"b866af6c-a952-4ef8-aee0-e18ee5799f98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" Jan 20 18:22:51 crc kubenswrapper[4661]: I0120 18:22:51.011326 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b866af6c-a952-4ef8-aee0-e18ee5799f98-config\") pod \"dnsmasq-dns-78dd6ddcc-zxnfn\" (UID: \"b866af6c-a952-4ef8-aee0-e18ee5799f98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" Jan 20 18:22:51 crc kubenswrapper[4661]: I0120 18:22:51.027428 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8hm6\" (UniqueName: \"kubernetes.io/projected/b866af6c-a952-4ef8-aee0-e18ee5799f98-kube-api-access-v8hm6\") pod \"dnsmasq-dns-78dd6ddcc-zxnfn\" (UID: \"b866af6c-a952-4ef8-aee0-e18ee5799f98\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" Jan 20 18:22:51 crc kubenswrapper[4661]: I0120 18:22:51.029974 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xpqnj" Jan 20 18:22:51 crc kubenswrapper[4661]: I0120 18:22:51.105084 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" Jan 20 18:22:51 crc kubenswrapper[4661]: I0120 18:22:51.405798 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zxnfn"] Jan 20 18:22:51 crc kubenswrapper[4661]: I0120 18:22:51.493087 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpqnj"] Jan 20 18:22:51 crc kubenswrapper[4661]: I0120 18:22:51.995016 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xpqnj" event={"ID":"0c4ac283-f86b-4a8e-958b-fe189004dc21","Type":"ContainerStarted","Data":"b36212e36ecfa8c1908c66dc37b7a85f5b308593a451a651a5bda2c9e266544a"} Jan 20 18:22:51 crc kubenswrapper[4661]: I0120 18:22:51.996082 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" event={"ID":"b866af6c-a952-4ef8-aee0-e18ee5799f98","Type":"ContainerStarted","Data":"30d83c7d8c624d246c96b2f438fe0cd46be4681f5c07de111c8c7a3d61738c81"} Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.559251 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpqnj"] Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.600916 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5v9q4"] Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.601981 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.615731 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5v9q4"] Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.689790 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ct2d\" (UniqueName: \"kubernetes.io/projected/02aac5db-0152-44ea-94e6-4e8ef20cbe41-kube-api-access-7ct2d\") pod \"dnsmasq-dns-666b6646f7-5v9q4\" (UID: \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\") " pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.689864 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02aac5db-0152-44ea-94e6-4e8ef20cbe41-config\") pod \"dnsmasq-dns-666b6646f7-5v9q4\" (UID: \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\") " pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.689935 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02aac5db-0152-44ea-94e6-4e8ef20cbe41-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5v9q4\" (UID: \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\") " pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.793555 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02aac5db-0152-44ea-94e6-4e8ef20cbe41-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5v9q4\" (UID: \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\") " pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.793649 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ct2d\" (UniqueName: \"kubernetes.io/projected/02aac5db-0152-44ea-94e6-4e8ef20cbe41-kube-api-access-7ct2d\") pod \"dnsmasq-dns-666b6646f7-5v9q4\" (UID: \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\") " pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.793706 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02aac5db-0152-44ea-94e6-4e8ef20cbe41-config\") pod \"dnsmasq-dns-666b6646f7-5v9q4\" (UID: \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\") " pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.794742 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02aac5db-0152-44ea-94e6-4e8ef20cbe41-config\") pod \"dnsmasq-dns-666b6646f7-5v9q4\" (UID: \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\") " pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.795385 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02aac5db-0152-44ea-94e6-4e8ef20cbe41-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5v9q4\" (UID: \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\") " pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.854785 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ct2d\" (UniqueName: 
\"kubernetes.io/projected/02aac5db-0152-44ea-94e6-4e8ef20cbe41-kube-api-access-7ct2d\") pod \"dnsmasq-dns-666b6646f7-5v9q4\" (UID: \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\") " pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.898707 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zxnfn"] Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.916897 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.934552 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fplm7"] Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.935653 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" Jan 20 18:22:53 crc kubenswrapper[4661]: I0120 18:22:53.954902 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fplm7"] Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.099250 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5xjk\" (UniqueName: \"kubernetes.io/projected/6aa88459-9af5-4ddd-a51e-32f0468ebc87-kube-api-access-h5xjk\") pod \"dnsmasq-dns-57d769cc4f-fplm7\" (UID: \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\") " pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.099307 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa88459-9af5-4ddd-a51e-32f0468ebc87-config\") pod \"dnsmasq-dns-57d769cc4f-fplm7\" (UID: \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\") " pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.099326 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa88459-9af5-4ddd-a51e-32f0468ebc87-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fplm7\" (UID: \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\") " pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.200126 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5xjk\" (UniqueName: \"kubernetes.io/projected/6aa88459-9af5-4ddd-a51e-32f0468ebc87-kube-api-access-h5xjk\") pod \"dnsmasq-dns-57d769cc4f-fplm7\" (UID: \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\") " pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.200165 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa88459-9af5-4ddd-a51e-32f0468ebc87-config\") pod \"dnsmasq-dns-57d769cc4f-fplm7\" (UID: \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\") " pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.200183 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa88459-9af5-4ddd-a51e-32f0468ebc87-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fplm7\" (UID: \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\") " pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.201051 4661 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa88459-9af5-4ddd-a51e-32f0468ebc87-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fplm7\" (UID: \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\") " pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.201217 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa88459-9af5-4ddd-a51e-32f0468ebc87-config\") pod \"dnsmasq-dns-57d769cc4f-fplm7\" (UID: \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\") " pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.240123 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5xjk\" (UniqueName: \"kubernetes.io/projected/6aa88459-9af5-4ddd-a51e-32f0468ebc87-kube-api-access-h5xjk\") pod \"dnsmasq-dns-57d769cc4f-fplm7\" (UID: \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\") " pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.318859 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.591038 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5v9q4"] Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.760739 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.767023 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.773848 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.774083 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.774262 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.773943 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.774275 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.774789 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.778262 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.779188 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zv9vw" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.913425 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-config-data\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.913473 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.913500 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.913535 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.913564 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.913586 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b764feba-067a-4a59-a23b-9a9b7725f420-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.913612 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.913661 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.913707 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b764feba-067a-4a59-a23b-9a9b7725f420-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.913732 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7sz2\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-kube-api-access-j7sz2\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.913785 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:54 crc kubenswrapper[4661]: I0120 18:22:54.954894 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fplm7"] Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.017891 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.017954 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b764feba-067a-4a59-a23b-9a9b7725f420-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.017990 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7sz2\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-kube-api-access-j7sz2\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.018072 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.018118 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-config-data\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.018156 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.018601 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.018656 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.018745 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.018774 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b764feba-067a-4a59-a23b-9a9b7725f420-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.018810 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.019844 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.020172 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.020391 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.021142 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-config-data\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.022077 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.024828 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.026536 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.026917 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b764feba-067a-4a59-a23b-9a9b7725f420-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.028473 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.039816 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b764feba-067a-4a59-a23b-9a9b7725f420-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.055203 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" event={"ID":"02aac5db-0152-44ea-94e6-4e8ef20cbe41","Type":"ContainerStarted","Data":"0935cd502984ef028c1107da56e111886933782bbe1e11ebf81aba421c57901a"} Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.059259 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7sz2\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-kube-api-access-j7sz2\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.064361 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.070902 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" event={"ID":"6aa88459-9af5-4ddd-a51e-32f0468ebc87","Type":"ContainerStarted","Data":"8ae06221f272172507bdcb8d939b653086002b6a236aab289e68b9666a4f91a5"} Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.080808 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.086504 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.094256 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.094751 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.094276 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.094997 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.095001 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4qjvk" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.095178 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.095335 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.117182 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.119779 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.119835 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.119881 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.119904 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.119940 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.119960 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.119974 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.120006 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.120031 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.120047 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.120065 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzql6\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-kube-api-access-bzql6\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.151107 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.221205 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.221243 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.221259 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.221288 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.221307 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.221327 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.221345 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzql6\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-kube-api-access-bzql6\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.221387 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.221405 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.221422 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.221444 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.222203 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.222805 4661 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.223131 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.223445 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.224106 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.224726 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.229866 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.231257 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.236692 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.243019 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.257310 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzql6\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-kube-api-access-bzql6\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.273065 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.470991 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:22:55 crc kubenswrapper[4661]: I0120 18:22:55.798108 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.042885 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 18:22:56 crc kubenswrapper[4661]: W0120 18:22:56.105985 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19a9e039_d4eb_475e_9ca9_6a6f6bfeb36c.slice/crio-455c04a109377b319c20e188276ca2154a9c7a825716089f2558649fcee5ea68 WatchSource:0}: Error finding container 455c04a109377b319c20e188276ca2154a9c7a825716089f2558649fcee5ea68: Status 404 returned error can't find the container with id 455c04a109377b319c20e188276ca2154a9c7a825716089f2558649fcee5ea68 Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.114448 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b764feba-067a-4a59-a23b-9a9b7725f420","Type":"ContainerStarted","Data":"57b498dfb51f9d23488313c50d7e93db17366501a94de9dedc3fc5727d94708b"} Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.341057 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.342772 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.348544 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.355465 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-rrlr4" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.356137 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.356261 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.360415 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.378371 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.440710 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6tmz\" (UniqueName: \"kubernetes.io/projected/1c705fc7-9ad0-4254-ad57-63db21057251-kube-api-access-x6tmz\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.440803 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c705fc7-9ad0-4254-ad57-63db21057251-config-data-default\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.440885 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c705fc7-9ad0-4254-ad57-63db21057251-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.440964 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c705fc7-9ad0-4254-ad57-63db21057251-kolla-config\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.441012 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c705fc7-9ad0-4254-ad57-63db21057251-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.441059 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.441087 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c705fc7-9ad0-4254-ad57-63db21057251-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.441134 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c705fc7-9ad0-4254-ad57-63db21057251-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.543622 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6tmz\" (UniqueName: \"kubernetes.io/projected/1c705fc7-9ad0-4254-ad57-63db21057251-kube-api-access-x6tmz\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.543792 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c705fc7-9ad0-4254-ad57-63db21057251-config-data-default\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.543888 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c705fc7-9ad0-4254-ad57-63db21057251-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.543938 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c705fc7-9ad0-4254-ad57-63db21057251-kolla-config\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.543975 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c705fc7-9ad0-4254-ad57-63db21057251-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.544009 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.544039 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c705fc7-9ad0-4254-ad57-63db21057251-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.544251 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c705fc7-9ad0-4254-ad57-63db21057251-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.550067 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c705fc7-9ad0-4254-ad57-63db21057251-kolla-config\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.551126 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c705fc7-9ad0-4254-ad57-63db21057251-config-data-default\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.551459 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c705fc7-9ad0-4254-ad57-63db21057251-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.554395 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.559745 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c705fc7-9ad0-4254-ad57-63db21057251-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.562853 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c705fc7-9ad0-4254-ad57-63db21057251-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.604176 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6tmz\" (UniqueName: \"kubernetes.io/projected/1c705fc7-9ad0-4254-ad57-63db21057251-kube-api-access-x6tmz\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.607219 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c705fc7-9ad0-4254-ad57-63db21057251-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.617651 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1c705fc7-9ad0-4254-ad57-63db21057251\") " pod="openstack/openstack-galera-0" Jan 20 18:22:56 crc kubenswrapper[4661]: I0120 18:22:56.674123 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.167213 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c","Type":"ContainerStarted","Data":"455c04a109377b319c20e188276ca2154a9c7a825716089f2558649fcee5ea68"} Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.312812 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.689995 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.691746 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.695994 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.696191 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.696336 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.696887 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-xsznm" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.718771 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.793092 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.793163 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6fxt\" (UniqueName: \"kubernetes.io/projected/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-kube-api-access-m6fxt\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.793200 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.793226 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.793282 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.793317 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.793338 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.793367 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.903839 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.903915 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6fxt\" (UniqueName: \"kubernetes.io/projected/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-kube-api-access-m6fxt\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.903945 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.903965 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.903992 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.904763 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.904789 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.904814 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.905014 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.905512 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.905570 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.905726 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.906062 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.909506 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.923967 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6fxt\" (UniqueName: \"kubernetes.io/projected/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-kube-api-access-m6fxt\") pod \"openstack-cell1-galera-0\" (UID: 
\"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.925092 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a3386fb-6ffa-47fa-8697-8d3c45ff61be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:57 crc kubenswrapper[4661]: I0120 18:22:57.932301 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1a3386fb-6ffa-47fa-8697-8d3c45ff61be\") " pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.042695 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.178551 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.191688 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.196046 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lfcwx" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.196257 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.196787 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.198015 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.208182 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1c705fc7-9ad0-4254-ad57-63db21057251","Type":"ContainerStarted","Data":"7f0674a842039a44188a7dfe018e843477e6b0159b74e8e36bcc3b61f3aee19d"} Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.316590 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-config-data\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.317161 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.317207 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-kolla-config\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.317227 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bzljh\" (UniqueName: \"kubernetes.io/projected/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-kube-api-access-bzljh\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.317256 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.418187 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-config-data\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.418263 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.418286 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-kolla-config\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.418321 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzljh\" (UniqueName: \"kubernetes.io/projected/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-kube-api-access-bzljh\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.418347 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.420473 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-kolla-config\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.422249 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-config-data\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.429320 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.431568 4661 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.466253 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzljh\" (UniqueName: \"kubernetes.io/projected/ee2394e6-ec1c-4093-9c8d-6a5795f4d146-kube-api-access-bzljh\") pod \"memcached-0\" (UID: \"ee2394e6-ec1c-4093-9c8d-6a5795f4d146\") " pod="openstack/memcached-0" Jan 20 18:22:58 crc kubenswrapper[4661]: I0120 18:22:58.581242 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 20 18:22:59 crc kubenswrapper[4661]: I0120 18:22:59.117931 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 20 18:22:59 crc kubenswrapper[4661]: I0120 18:22:59.266862 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1a3386fb-6ffa-47fa-8697-8d3c45ff61be","Type":"ContainerStarted","Data":"ccbcdfac27812ce951ed0999e4c4cce35c19971b44f787eb11a8c78f1260309a"} Jan 20 18:22:59 crc kubenswrapper[4661]: I0120 18:22:59.340014 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 20 18:23:00 crc kubenswrapper[4661]: I0120 18:23:00.248556 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 18:23:00 crc kubenswrapper[4661]: I0120 18:23:00.250204 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 18:23:00 crc kubenswrapper[4661]: I0120 18:23:00.254940 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-tk4hq" Jan 20 18:23:00 crc kubenswrapper[4661]: I0120 18:23:00.287180 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 18:23:00 crc kubenswrapper[4661]: I0120 18:23:00.295548 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nvgf\" (UniqueName: \"kubernetes.io/projected/b2c8897f-8188-4a97-8839-e205b94514c7-kube-api-access-7nvgf\") pod \"kube-state-metrics-0\" (UID: \"b2c8897f-8188-4a97-8839-e205b94514c7\") " pod="openstack/kube-state-metrics-0" Jan 20 18:23:00 crc kubenswrapper[4661]: I0120 18:23:00.346466 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ee2394e6-ec1c-4093-9c8d-6a5795f4d146","Type":"ContainerStarted","Data":"735a3c6c61bdd3206b87ab54ff903c3071477c88b5345718ed9fdd05718dad13"} Jan 20 18:23:00 crc kubenswrapper[4661]: I0120 18:23:00.400448 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nvgf\" (UniqueName: \"kubernetes.io/projected/b2c8897f-8188-4a97-8839-e205b94514c7-kube-api-access-7nvgf\") pod \"kube-state-metrics-0\" (UID: \"b2c8897f-8188-4a97-8839-e205b94514c7\") " pod="openstack/kube-state-metrics-0" Jan 20 18:23:00 crc kubenswrapper[4661]: I0120 18:23:00.424769 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nvgf\" (UniqueName: \"kubernetes.io/projected/b2c8897f-8188-4a97-8839-e205b94514c7-kube-api-access-7nvgf\") pod \"kube-state-metrics-0\" (UID: \"b2c8897f-8188-4a97-8839-e205b94514c7\") " pod="openstack/kube-state-metrics-0" Jan 20 18:23:00 crc 
kubenswrapper[4661]: I0120 18:23:00.624175 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 18:23:01 crc kubenswrapper[4661]: I0120 18:23:01.368203 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 18:23:01 crc kubenswrapper[4661]: I0120 18:23:01.399650 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b2c8897f-8188-4a97-8839-e205b94514c7","Type":"ContainerStarted","Data":"fd7d6e00d9b53c5cf44324345d643ed00376d51d84b938c2fc28429f580ef091"} Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.698642 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-p7h4x"] Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.700123 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.704750 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-9wpdz" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.704812 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.704945 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.716603 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9k84x"] Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.718024 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.734534 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p7h4x"] Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.756736 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9k84x"] Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.859214 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/65017fb7-6ab3-43d0-a308-a3d8da39b811-var-run-ovn\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.859268 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-var-lib\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.859306 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65017fb7-6ab3-43d0-a308-a3d8da39b811-scripts\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.859325 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/65017fb7-6ab3-43d0-a308-a3d8da39b811-ovn-controller-tls-certs\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.859353 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rz4n\" (UniqueName: \"kubernetes.io/projected/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-kube-api-access-6rz4n\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.859376 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-var-run\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.859437 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65017fb7-6ab3-43d0-a308-a3d8da39b811-combined-ca-bundle\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.859466 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-var-log\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.859489 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2whh7\" (UniqueName: \"kubernetes.io/projected/65017fb7-6ab3-43d0-a308-a3d8da39b811-kube-api-access-2whh7\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.859533 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-scripts\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.859563 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/65017fb7-6ab3-43d0-a308-a3d8da39b811-var-run\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.859597 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/65017fb7-6ab3-43d0-a308-a3d8da39b811-var-log-ovn\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.859618 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-etc-ovs\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.965147 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65017fb7-6ab3-43d0-a308-a3d8da39b811-combined-ca-bundle\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.965199 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-var-log\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.965227 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2whh7\" (UniqueName: \"kubernetes.io/projected/65017fb7-6ab3-43d0-a308-a3d8da39b811-kube-api-access-2whh7\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.965262 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-scripts\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.965289 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/65017fb7-6ab3-43d0-a308-a3d8da39b811-var-run\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.965321 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/65017fb7-6ab3-43d0-a308-a3d8da39b811-var-log-ovn\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.965342 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-etc-ovs\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.965363 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/65017fb7-6ab3-43d0-a308-a3d8da39b811-var-run-ovn\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.965385 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-var-lib\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc 
kubenswrapper[4661]: I0120 18:23:03.965415 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65017fb7-6ab3-43d0-a308-a3d8da39b811-scripts\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.965434 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/65017fb7-6ab3-43d0-a308-a3d8da39b811-ovn-controller-tls-certs\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.965458 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rz4n\" (UniqueName: \"kubernetes.io/projected/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-kube-api-access-6rz4n\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.965477 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-var-run\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.966413 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-var-run\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.966461 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/65017fb7-6ab3-43d0-a308-a3d8da39b811-var-log-ovn\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.966543 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-etc-ovs\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.969432 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/65017fb7-6ab3-43d0-a308-a3d8da39b811-var-run-ovn\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.969565 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-var-lib\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.969619 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/65017fb7-6ab3-43d0-a308-a3d8da39b811-var-run\") pod \"ovn-controller-p7h4x\" 
(UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.969768 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-var-log\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.971097 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-scripts\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.974503 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65017fb7-6ab3-43d0-a308-a3d8da39b811-combined-ca-bundle\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.984285 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65017fb7-6ab3-43d0-a308-a3d8da39b811-scripts\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:03 crc kubenswrapper[4661]: I0120 18:23:03.993387 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2whh7\" (UniqueName: \"kubernetes.io/projected/65017fb7-6ab3-43d0-a308-a3d8da39b811-kube-api-access-2whh7\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:04 crc kubenswrapper[4661]: I0120 18:23:04.001964 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/65017fb7-6ab3-43d0-a308-a3d8da39b811-ovn-controller-tls-certs\") pod \"ovn-controller-p7h4x\" (UID: \"65017fb7-6ab3-43d0-a308-a3d8da39b811\") " pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:04 crc kubenswrapper[4661]: I0120 18:23:04.002310 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rz4n\" (UniqueName: \"kubernetes.io/projected/6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d-kube-api-access-6rz4n\") pod \"ovn-controller-ovs-9k84x\" (UID: \"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d\") " pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:04 crc kubenswrapper[4661]: I0120 18:23:04.030518 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:04 crc kubenswrapper[4661]: I0120 18:23:04.043257 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:05 crc kubenswrapper[4661]: I0120 18:23:05.976577 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 20 18:23:05 crc kubenswrapper[4661]: I0120 18:23:05.977720 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:05 crc kubenswrapper[4661]: I0120 18:23:05.980935 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 20 18:23:05 crc kubenswrapper[4661]: I0120 18:23:05.987874 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 20 18:23:05 crc kubenswrapper[4661]: I0120 18:23:05.987897 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 20 18:23:05 crc kubenswrapper[4661]: I0120 18:23:05.987952 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-5fmlq" Jan 20 18:23:05 crc kubenswrapper[4661]: I0120 18:23:05.988628 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:05.999478 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.121010 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a95987a-80ef-495d-adf7-f60c952836ce-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.121058 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a95987a-80ef-495d-adf7-f60c952836ce-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.121094 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a95987a-80ef-495d-adf7-f60c952836ce-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.121168 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a95987a-80ef-495d-adf7-f60c952836ce-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.121196 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpphh\" (UniqueName: \"kubernetes.io/projected/8a95987a-80ef-495d-adf7-f60c952836ce-kube-api-access-cpphh\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.121216 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a95987a-80ef-495d-adf7-f60c952836ce-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.121231 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a95987a-80ef-495d-adf7-f60c952836ce-config\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.121258 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.223205 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.223290 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a95987a-80ef-495d-adf7-f60c952836ce-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.223323 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a95987a-80ef-495d-adf7-f60c952836ce-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.223352 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a95987a-80ef-495d-adf7-f60c952836ce-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.223394 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a95987a-80ef-495d-adf7-f60c952836ce-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.223412 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpphh\" (UniqueName: \"kubernetes.io/projected/8a95987a-80ef-495d-adf7-f60c952836ce-kube-api-access-cpphh\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.223430 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a95987a-80ef-495d-adf7-f60c952836ce-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.223446 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a95987a-80ef-495d-adf7-f60c952836ce-config\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 
18:23:06.224277 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a95987a-80ef-495d-adf7-f60c952836ce-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.224456 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.224880 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a95987a-80ef-495d-adf7-f60c952836ce-config\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.225778 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a95987a-80ef-495d-adf7-f60c952836ce-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.236463 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a95987a-80ef-495d-adf7-f60c952836ce-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.246413 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a95987a-80ef-495d-adf7-f60c952836ce-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.252590 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a95987a-80ef-495d-adf7-f60c952836ce-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.255528 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpphh\" (UniqueName: \"kubernetes.io/projected/8a95987a-80ef-495d-adf7-f60c952836ce-kube-api-access-cpphh\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.270199 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8a95987a-80ef-495d-adf7-f60c952836ce\") " pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:06 crc kubenswrapper[4661]: I0120 18:23:06.303108 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.050970 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.052294 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.055235 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.058053 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-w2p87" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.058109 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.058146 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.066691 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.167382 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20636e35-51a8-4c79-888a-64d59e109a53-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.167463 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20636e35-51a8-4c79-888a-64d59e109a53-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.167496 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20636e35-51a8-4c79-888a-64d59e109a53-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.167514 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20636e35-51a8-4c79-888a-64d59e109a53-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.167550 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8h6m\" (UniqueName: \"kubernetes.io/projected/20636e35-51a8-4c79-888a-64d59e109a53-kube-api-access-m8h6m\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.167595 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 
18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.167647 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/20636e35-51a8-4c79-888a-64d59e109a53-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.167720 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20636e35-51a8-4c79-888a-64d59e109a53-config\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.268939 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20636e35-51a8-4c79-888a-64d59e109a53-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.269650 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20636e35-51a8-4c79-888a-64d59e109a53-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.269976 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20636e35-51a8-4c79-888a-64d59e109a53-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.270005 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20636e35-51a8-4c79-888a-64d59e109a53-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.270053 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8h6m\" (UniqueName: \"kubernetes.io/projected/20636e35-51a8-4c79-888a-64d59e109a53-kube-api-access-m8h6m\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.270071 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.270095 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/20636e35-51a8-4c79-888a-64d59e109a53-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.270128 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20636e35-51a8-4c79-888a-64d59e109a53-config\") pod 
\"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.270787 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.271060 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20636e35-51a8-4c79-888a-64d59e109a53-config\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.271370 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/20636e35-51a8-4c79-888a-64d59e109a53-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.271481 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20636e35-51a8-4c79-888a-64d59e109a53-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.275182 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20636e35-51a8-4c79-888a-64d59e109a53-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.275563 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20636e35-51a8-4c79-888a-64d59e109a53-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.279813 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20636e35-51a8-4c79-888a-64d59e109a53-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.288913 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8h6m\" (UniqueName: \"kubernetes.io/projected/20636e35-51a8-4c79-888a-64d59e109a53-kube-api-access-m8h6m\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.298135 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"20636e35-51a8-4c79-888a-64d59e109a53\") " pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:08 crc kubenswrapper[4661]: I0120 18:23:08.389445 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:18 crc kubenswrapper[4661]: E0120 18:23:18.964742 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 20 18:23:18 crc kubenswrapper[4661]: E0120 18:23:18.965592 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzql6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:23:18 crc kubenswrapper[4661]: E0120 18:23:18.966920 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/rabbitmq-cell1-server-0" podUID="19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" Jan 20 18:23:19 crc kubenswrapper[4661]: E0120 18:23:19.567861 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" Jan 20 18:23:25 crc kubenswrapper[4661]: E0120 18:23:25.155601 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 20 18:23:25 crc kubenswrapper[4661]: E0120 18:23:25.156116 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6tmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(1c705fc7-9ad0-4254-ad57-63db21057251): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:23:25 crc kubenswrapper[4661]: E0120 18:23:25.157436 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="1c705fc7-9ad0-4254-ad57-63db21057251" Jan 20 18:23:25 crc kubenswrapper[4661]: E0120 18:23:25.164883 4661 log.go:32] "PullImage from 
image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 20 18:23:25 crc kubenswrapper[4661]: E0120 18:23:25.165309 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6fxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(1a3386fb-6ffa-47fa-8697-8d3c45ff61be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:23:25 crc kubenswrapper[4661]: E0120 18:23:25.166583 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="1a3386fb-6ffa-47fa-8697-8d3c45ff61be" Jan 20 18:23:25 crc kubenswrapper[4661]: E0120 18:23:25.622864 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="1c705fc7-9ad0-4254-ad57-63db21057251" Jan 20 18:23:25 crc kubenswrapper[4661]: E0120 18:23:25.625230 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="1a3386fb-6ffa-47fa-8697-8d3c45ff61be" Jan 20 18:23:25 crc kubenswrapper[4661]: E0120 18:23:25.842526 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 20 18:23:25 crc kubenswrapper[4661]: E0120 18:23:25.842808 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n56ch7fh664h55bh57fh5d4h5d6hb4h96h64dhb9h95h547h586h68dh5f7hbh6ch587h667h8bhb6hdbh597h55bh654h65ch8fh674h5fh575h7dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzljh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(ee2394e6-ec1c-4093-9c8d-6a5795f4d146): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:23:25 crc kubenswrapper[4661]: E0120 18:23:25.845888 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="ee2394e6-ec1c-4093-9c8d-6a5795f4d146" Jan 20 18:23:26 crc kubenswrapper[4661]: I0120 18:23:26.565527 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 20 18:23:26 crc kubenswrapper[4661]: E0120 18:23:26.631563 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="ee2394e6-ec1c-4093-9c8d-6a5795f4d146" Jan 20 18:23:26 crc kubenswrapper[4661]: W0120 18:23:26.760695 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a95987a_80ef_495d_adf7_f60c952836ce.slice/crio-028382b83f4c64167e36f1ca7497d4025254ea835b8d9be3b0fa668f390ee0f0 WatchSource:0}: Error finding container 028382b83f4c64167e36f1ca7497d4025254ea835b8d9be3b0fa668f390ee0f0: Status 404 returned error can't find the container with id 028382b83f4c64167e36f1ca7497d4025254ea835b8d9be3b0fa668f390ee0f0 Jan 20 18:23:26 crc kubenswrapper[4661]: E0120 18:23:26.776895 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 18:23:26 crc kubenswrapper[4661]: E0120 18:23:26.777091 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ct2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-5v9q4_openstack(02aac5db-0152-44ea-94e6-4e8ef20cbe41): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:23:26 crc kubenswrapper[4661]: E0120 18:23:26.779632 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" podUID="02aac5db-0152-44ea-94e6-4e8ef20cbe41" Jan 20 18:23:26 crc kubenswrapper[4661]: E0120 18:23:26.857717 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 18:23:26 crc kubenswrapper[4661]: E0120 18:23:26.858318 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5xjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-fplm7_openstack(6aa88459-9af5-4ddd-a51e-32f0468ebc87): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:23:26 crc kubenswrapper[4661]: E0120 18:23:26.859566 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" podUID="6aa88459-9af5-4ddd-a51e-32f0468ebc87" Jan 20 18:23:26 crc kubenswrapper[4661]: E0120 18:23:26.926542 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 18:23:26 crc kubenswrapper[4661]: E0120 18:23:26.926718 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmzks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-xpqnj_openstack(0c4ac283-f86b-4a8e-958b-fe189004dc21): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:23:26 crc kubenswrapper[4661]: E0120 18:23:26.928428 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-xpqnj" podUID="0c4ac283-f86b-4a8e-958b-fe189004dc21" Jan 20 18:23:26 crc kubenswrapper[4661]: E0120 18:23:26.929765 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 18:23:26 crc kubenswrapper[4661]: E0120 18:23:26.929840 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8hm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-zxnfn_openstack(b866af6c-a952-4ef8-aee0-e18ee5799f98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:23:26 crc kubenswrapper[4661]: E0120 18:23:26.931327 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" podUID="b866af6c-a952-4ef8-aee0-e18ee5799f98" Jan 20 18:23:27 crc kubenswrapper[4661]: I0120 18:23:27.249105 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p7h4x"] Jan 20 18:23:27 crc kubenswrapper[4661]: I0120 18:23:27.518852 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 20 18:23:27 crc kubenswrapper[4661]: I0120 18:23:27.646956 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7h4x" event={"ID":"65017fb7-6ab3-43d0-a308-a3d8da39b811","Type":"ContainerStarted","Data":"6f663f481e6d874c6d12606214b330ae068f49d7445c7a7febece948627123e6"} Jan 20 18:23:27 crc kubenswrapper[4661]: I0120 18:23:27.648324 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"20636e35-51a8-4c79-888a-64d59e109a53","Type":"ContainerStarted","Data":"c08abc4bdb56e21fc4be3d838e100c93b59bf7c3581e375ef4263fcad6b66a7c"} Jan 20 18:23:27 crc kubenswrapper[4661]: I0120 18:23:27.660968 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8a95987a-80ef-495d-adf7-f60c952836ce","Type":"ContainerStarted","Data":"028382b83f4c64167e36f1ca7497d4025254ea835b8d9be3b0fa668f390ee0f0"} Jan 20 18:23:27 crc kubenswrapper[4661]: E0120 18:23:27.671035 4661 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" podUID="02aac5db-0152-44ea-94e6-4e8ef20cbe41" Jan 20 18:23:27 crc kubenswrapper[4661]: E0120 18:23:27.675069 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" podUID="6aa88459-9af5-4ddd-a51e-32f0468ebc87" Jan 20 18:23:27 crc kubenswrapper[4661]: I0120 18:23:27.777617 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9k84x"] Jan 20 18:23:27 crc kubenswrapper[4661]: W0120 18:23:27.890859 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d1b9f50_80c4_494b_8ea6_f3cd3ca1b98d.slice/crio-105634757cd806143cfa6994ae75c4e1918e41106706e6347fd96c74dfe60f51 WatchSource:0}: Error finding container 105634757cd806143cfa6994ae75c4e1918e41106706e6347fd96c74dfe60f51: Status 404 returned error can't find the container with id 105634757cd806143cfa6994ae75c4e1918e41106706e6347fd96c74dfe60f51 Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.334650 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.343540 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xpqnj" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.438879 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b866af6c-a952-4ef8-aee0-e18ee5799f98-dns-svc\") pod \"b866af6c-a952-4ef8-aee0-e18ee5799f98\" (UID: \"b866af6c-a952-4ef8-aee0-e18ee5799f98\") " Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.438945 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b866af6c-a952-4ef8-aee0-e18ee5799f98-config\") pod \"b866af6c-a952-4ef8-aee0-e18ee5799f98\" (UID: \"b866af6c-a952-4ef8-aee0-e18ee5799f98\") " Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.439014 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8hm6\" (UniqueName: \"kubernetes.io/projected/b866af6c-a952-4ef8-aee0-e18ee5799f98-kube-api-access-v8hm6\") pod \"b866af6c-a952-4ef8-aee0-e18ee5799f98\" (UID: \"b866af6c-a952-4ef8-aee0-e18ee5799f98\") " Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.439373 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b866af6c-a952-4ef8-aee0-e18ee5799f98-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b866af6c-a952-4ef8-aee0-e18ee5799f98" (UID: "b866af6c-a952-4ef8-aee0-e18ee5799f98"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.439392 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b866af6c-a952-4ef8-aee0-e18ee5799f98-config" (OuterVolumeSpecName: "config") pod "b866af6c-a952-4ef8-aee0-e18ee5799f98" (UID: "b866af6c-a952-4ef8-aee0-e18ee5799f98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.439530 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmzks\" (UniqueName: \"kubernetes.io/projected/0c4ac283-f86b-4a8e-958b-fe189004dc21-kube-api-access-pmzks\") pod \"0c4ac283-f86b-4a8e-958b-fe189004dc21\" (UID: \"0c4ac283-f86b-4a8e-958b-fe189004dc21\") " Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.439989 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c4ac283-f86b-4a8e-958b-fe189004dc21-config\") pod \"0c4ac283-f86b-4a8e-958b-fe189004dc21\" (UID: \"0c4ac283-f86b-4a8e-958b-fe189004dc21\") " Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.440287 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b866af6c-a952-4ef8-aee0-e18ee5799f98-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.440305 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b866af6c-a952-4ef8-aee0-e18ee5799f98-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.440316 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4ac283-f86b-4a8e-958b-fe189004dc21-config" (OuterVolumeSpecName: "config") pod "0c4ac283-f86b-4a8e-958b-fe189004dc21" (UID: "0c4ac283-f86b-4a8e-958b-fe189004dc21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.532041 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b866af6c-a952-4ef8-aee0-e18ee5799f98-kube-api-access-v8hm6" (OuterVolumeSpecName: "kube-api-access-v8hm6") pod "b866af6c-a952-4ef8-aee0-e18ee5799f98" (UID: "b866af6c-a952-4ef8-aee0-e18ee5799f98"). InnerVolumeSpecName "kube-api-access-v8hm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.532191 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4ac283-f86b-4a8e-958b-fe189004dc21-kube-api-access-pmzks" (OuterVolumeSpecName: "kube-api-access-pmzks") pod "0c4ac283-f86b-4a8e-958b-fe189004dc21" (UID: "0c4ac283-f86b-4a8e-958b-fe189004dc21"). InnerVolumeSpecName "kube-api-access-pmzks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.541453 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmzks\" (UniqueName: \"kubernetes.io/projected/0c4ac283-f86b-4a8e-958b-fe189004dc21-kube-api-access-pmzks\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.541480 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c4ac283-f86b-4a8e-958b-fe189004dc21-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.541493 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8hm6\" (UniqueName: \"kubernetes.io/projected/b866af6c-a952-4ef8-aee0-e18ee5799f98-kube-api-access-v8hm6\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.674311 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9k84x" event={"ID":"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d","Type":"ContainerStarted","Data":"105634757cd806143cfa6994ae75c4e1918e41106706e6347fd96c74dfe60f51"} Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.675821 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xpqnj" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.675854 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xpqnj" event={"ID":"0c4ac283-f86b-4a8e-958b-fe189004dc21","Type":"ContainerDied","Data":"b36212e36ecfa8c1908c66dc37b7a85f5b308593a451a651a5bda2c9e266544a"} Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.676988 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" event={"ID":"b866af6c-a952-4ef8-aee0-e18ee5799f98","Type":"ContainerDied","Data":"30d83c7d8c624d246c96b2f438fe0cd46be4681f5c07de111c8c7a3d61738c81"} Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.677091 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zxnfn" Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.686812 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b764feba-067a-4a59-a23b-9a9b7725f420","Type":"ContainerStarted","Data":"8ded1c3a4a61ed6debad60aa74cc2e5774f7de46bb912d100ebb824fcb556ec7"} Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.793574 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpqnj"] Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.800731 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpqnj"] Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.831197 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zxnfn"] Jan 20 18:23:28 crc kubenswrapper[4661]: I0120 18:23:28.837802 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zxnfn"] Jan 20 18:23:30 crc kubenswrapper[4661]: I0120 18:23:30.152857 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c4ac283-f86b-4a8e-958b-fe189004dc21" path="/var/lib/kubelet/pods/0c4ac283-f86b-4a8e-958b-fe189004dc21/volumes" Jan 20 18:23:30 crc kubenswrapper[4661]: I0120 18:23:30.153938 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b866af6c-a952-4ef8-aee0-e18ee5799f98" path="/var/lib/kubelet/pods/b866af6c-a952-4ef8-aee0-e18ee5799f98/volumes" Jan 20 18:23:31 crc kubenswrapper[4661]: I0120 18:23:31.711389 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b2c8897f-8188-4a97-8839-e205b94514c7","Type":"ContainerStarted","Data":"6610edecbf5f8693af6fa0504bc285ed19cc0eb56dd390c3e1503ba694919956"} Jan 20 18:23:31 crc kubenswrapper[4661]: I0120 18:23:31.712548 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 20 18:23:31 crc kubenswrapper[4661]: I0120 18:23:31.738145 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.55782491 podStartE2EDuration="31.73812421s" podCreationTimestamp="2026-01-20 18:23:00 +0000 UTC" firstStartedPulling="2026-01-20 18:23:01.38964311 +0000 UTC m=+1037.720432772" lastFinishedPulling="2026-01-20 18:23:29.56994241 +0000 UTC m=+1065.900732072" observedRunningTime="2026-01-20 18:23:31.72905997 +0000 UTC m=+1068.059849642" watchObservedRunningTime="2026-01-20 18:23:31.73812421 +0000 UTC m=+1068.068913872" Jan 20 18:23:32 crc kubenswrapper[4661]: I0120 18:23:32.736883 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8a95987a-80ef-495d-adf7-f60c952836ce","Type":"ContainerStarted","Data":"bff7e06c8ba64f9f0e50911832b9d7489689f9967b99f6247c1079861bdb6fd2"} Jan 20 18:23:32 crc kubenswrapper[4661]: I0120 18:23:32.741312 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"20636e35-51a8-4c79-888a-64d59e109a53","Type":"ContainerStarted","Data":"f3edff744e32733141b5a997d5832b90bf32a9a517010515c04bec796b82dfe9"} Jan 20 18:23:33 crc kubenswrapper[4661]: I0120 18:23:33.779062 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c","Type":"ContainerStarted","Data":"c76853192ee68da9d687430ebad4adc079589fbe7ce8c6c6524d5c045257a90f"} Jan 20 18:23:33 crc 
kubenswrapper[4661]: I0120 18:23:33.785187 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9k84x" event={"ID":"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d","Type":"ContainerDied","Data":"686048a708a329d03c2b30874c06908a5b646303fef164d3cecb69f19ba29c47"} Jan 20 18:23:33 crc kubenswrapper[4661]: I0120 18:23:33.784755 4661 generic.go:334] "Generic (PLEG): container finished" podID="6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d" containerID="686048a708a329d03c2b30874c06908a5b646303fef164d3cecb69f19ba29c47" exitCode=0 Jan 20 18:23:33 crc kubenswrapper[4661]: I0120 18:23:33.802177 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7h4x" event={"ID":"65017fb7-6ab3-43d0-a308-a3d8da39b811","Type":"ContainerStarted","Data":"ea3631c9181994cd174196a1345c394028d9bc7a4a3035eea557a27317f4d7fe"} Jan 20 18:23:33 crc kubenswrapper[4661]: I0120 18:23:33.802257 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-p7h4x" Jan 20 18:23:33 crc kubenswrapper[4661]: I0120 18:23:33.832157 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-p7h4x" podStartSLOduration=26.005975813 podStartE2EDuration="30.832136658s" podCreationTimestamp="2026-01-20 18:23:03 +0000 UTC" firstStartedPulling="2026-01-20 18:23:27.633803038 +0000 UTC m=+1063.964592700" lastFinishedPulling="2026-01-20 18:23:32.459963883 +0000 UTC m=+1068.790753545" observedRunningTime="2026-01-20 18:23:33.828084301 +0000 UTC m=+1070.158873973" watchObservedRunningTime="2026-01-20 18:23:33.832136658 +0000 UTC m=+1070.162926330" Jan 20 18:23:34 crc kubenswrapper[4661]: I0120 18:23:34.813394 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9k84x" event={"ID":"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d","Type":"ContainerStarted","Data":"45c67651fb358e28dd672d9397a801610d544c7ad119e55265c616202c62be86"} Jan 20 18:23:34 crc kubenswrapper[4661]: I0120 18:23:34.813720 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9k84x" event={"ID":"6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d","Type":"ContainerStarted","Data":"0d4c162a8f82d849fcdd89aea51ca0d41133686077b870f68e4e471ee144cfbf"} Jan 20 18:23:34 crc kubenswrapper[4661]: I0120 18:23:34.813807 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:34 crc kubenswrapper[4661]: I0120 18:23:34.813833 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:23:34 crc kubenswrapper[4661]: I0120 18:23:34.841632 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9k84x" podStartSLOduration=27.311170395 podStartE2EDuration="31.841615549s" podCreationTimestamp="2026-01-20 18:23:03 +0000 UTC" firstStartedPulling="2026-01-20 18:23:27.893897687 +0000 UTC m=+1064.224687349" lastFinishedPulling="2026-01-20 18:23:32.424342841 +0000 UTC m=+1068.755132503" observedRunningTime="2026-01-20 18:23:34.833564127 +0000 UTC m=+1071.164353789" watchObservedRunningTime="2026-01-20 18:23:34.841615549 +0000 UTC m=+1071.172405211" Jan 20 18:23:36 crc kubenswrapper[4661]: I0120 18:23:36.829460 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"20636e35-51a8-4c79-888a-64d59e109a53","Type":"ContainerStarted","Data":"1691623d4d1a00dde2cf9bcf433fc3bf284e9de8d45edf7744e54ac5f5c796b6"} Jan 20 18:23:36 crc 
kubenswrapper[4661]: I0120 18:23:36.831805 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8a95987a-80ef-495d-adf7-f60c952836ce","Type":"ContainerStarted","Data":"3135fdba25eaad67def3ca060cb7e7d78f75161cb297afa051d05d11335e1e82"} Jan 20 18:23:36 crc kubenswrapper[4661]: I0120 18:23:36.860513 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=21.168548492 podStartE2EDuration="29.860494051s" podCreationTimestamp="2026-01-20 18:23:07 +0000 UTC" firstStartedPulling="2026-01-20 18:23:27.634041914 +0000 UTC m=+1063.964831576" lastFinishedPulling="2026-01-20 18:23:36.325987473 +0000 UTC m=+1072.656777135" observedRunningTime="2026-01-20 18:23:36.850397484 +0000 UTC m=+1073.181187146" watchObservedRunningTime="2026-01-20 18:23:36.860494051 +0000 UTC m=+1073.191283713" Jan 20 18:23:36 crc kubenswrapper[4661]: I0120 18:23:36.877913 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=23.31557829 podStartE2EDuration="32.877896081s" podCreationTimestamp="2026-01-20 18:23:04 +0000 UTC" firstStartedPulling="2026-01-20 18:23:26.785202051 +0000 UTC m=+1063.115991713" lastFinishedPulling="2026-01-20 18:23:36.347519842 +0000 UTC m=+1072.678309504" observedRunningTime="2026-01-20 18:23:36.87330179 +0000 UTC m=+1073.204091452" watchObservedRunningTime="2026-01-20 18:23:36.877896081 +0000 UTC m=+1073.208685743" Jan 20 18:23:38 crc kubenswrapper[4661]: I0120 18:23:38.390705 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:38 crc kubenswrapper[4661]: I0120 18:23:38.391242 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:38 crc kubenswrapper[4661]: I0120 18:23:38.441565 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:38 crc kubenswrapper[4661]: I0120 18:23:38.887172 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.248657 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fplm7"] Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.305305 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.319808 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hl6rr"] Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.321300 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.326676 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.363379 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hl6rr"] Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.408939 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-8mm55"] Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.410047 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.414052 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.435391 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8mm55"] Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.467968 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-hl6rr\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.468056 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g4nk\" (UniqueName: \"kubernetes.io/projected/abfb2e06-c405-418f-9bf9-83aeb2b3f706-kube-api-access-4g4nk\") pod \"dnsmasq-dns-7f896c8c65-hl6rr\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.468091 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-hl6rr\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.468136 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-config\") pod \"dnsmasq-dns-7f896c8c65-hl6rr\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.486010 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.575108 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-hl6rr\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.575176 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-combined-ca-bundle\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.575233 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s94lk\" (UniqueName: \"kubernetes.io/projected/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-kube-api-access-s94lk\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.575292 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-ovn-rundir\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.575316 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-ovs-rundir\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.575353 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g4nk\" (UniqueName: \"kubernetes.io/projected/abfb2e06-c405-418f-9bf9-83aeb2b3f706-kube-api-access-4g4nk\") pod \"dnsmasq-dns-7f896c8c65-hl6rr\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.575390 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-config\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.575420 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-hl6rr\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.575481 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.575499 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-config\") pod \"dnsmasq-dns-7f896c8c65-hl6rr\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.576399 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-hl6rr\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.577509 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-hl6rr\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.578023 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-config\") pod \"dnsmasq-dns-7f896c8c65-hl6rr\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.613603 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g4nk\" (UniqueName: \"kubernetes.io/projected/abfb2e06-c405-418f-9bf9-83aeb2b3f706-kube-api-access-4g4nk\") pod \"dnsmasq-dns-7f896c8c65-hl6rr\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.657236 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.677253 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-combined-ca-bundle\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.677308 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s94lk\" (UniqueName: \"kubernetes.io/projected/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-kube-api-access-s94lk\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.677341 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-ovn-rundir\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.677357 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-ovs-rundir\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.677384 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-config\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.677416 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.681163 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-ovn-rundir\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 
18:23:39.681246 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-ovs-rundir\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.681875 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-config\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.685177 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.688642 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-combined-ca-bundle\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.715137 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s94lk\" (UniqueName: \"kubernetes.io/projected/97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8-kube-api-access-s94lk\") pod \"ovn-controller-metrics-8mm55\" (UID: \"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8\") " pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.730720 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5v9q4"] Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.764359 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8mm55" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.777472 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tpwdx"] Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.778952 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.780218 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.780475 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.848529 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tpwdx"] Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.879312 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa88459-9af5-4ddd-a51e-32f0468ebc87-dns-svc\") pod \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\" (UID: \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\") " Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.879408 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa88459-9af5-4ddd-a51e-32f0468ebc87-config\") pod \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\" (UID: \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\") " Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.879488 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5xjk\" (UniqueName: \"kubernetes.io/projected/6aa88459-9af5-4ddd-a51e-32f0468ebc87-kube-api-access-h5xjk\") pod \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\" (UID: \"6aa88459-9af5-4ddd-a51e-32f0468ebc87\") " Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.879839 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.879865 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2x9d\" (UniqueName: \"kubernetes.io/projected/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-kube-api-access-n2x9d\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.879942 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-config\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.879962 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.880016 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.880567 4661 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa88459-9af5-4ddd-a51e-32f0468ebc87-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6aa88459-9af5-4ddd-a51e-32f0468ebc87" (UID: "6aa88459-9af5-4ddd-a51e-32f0468ebc87"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.881063 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1a3386fb-6ffa-47fa-8697-8d3c45ff61be","Type":"ContainerStarted","Data":"541c042338d70d95a20a237ddcf0560205f24c544d0bf1f913f65243150dd01f"} Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.883103 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa88459-9af5-4ddd-a51e-32f0468ebc87-config" (OuterVolumeSpecName: "config") pod "6aa88459-9af5-4ddd-a51e-32f0468ebc87" (UID: "6aa88459-9af5-4ddd-a51e-32f0468ebc87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.884805 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa88459-9af5-4ddd-a51e-32f0468ebc87-kube-api-access-h5xjk" (OuterVolumeSpecName: "kube-api-access-h5xjk") pod "6aa88459-9af5-4ddd-a51e-32f0468ebc87" (UID: "6aa88459-9af5-4ddd-a51e-32f0468ebc87"). InnerVolumeSpecName "kube-api-access-h5xjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.891049 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" event={"ID":"6aa88459-9af5-4ddd-a51e-32f0468ebc87","Type":"ContainerDied","Data":"8ae06221f272172507bdcb8d939b653086002b6a236aab289e68b9666a4f91a5"} Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.891075 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fplm7" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.893046 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1c705fc7-9ad0-4254-ad57-63db21057251","Type":"ContainerStarted","Data":"0c1cf65e4b07b5646bb3f7db29b931fb4b255d998e1838dbee9912bb2b5e47d4"} Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.893393 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.970131 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.987584 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.987662 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.987720 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2x9d\" (UniqueName: \"kubernetes.io/projected/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-kube-api-access-n2x9d\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.987808 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-config\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.987826 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.987869 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa88459-9af5-4ddd-a51e-32f0468ebc87-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.987880 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa88459-9af5-4ddd-a51e-32f0468ebc87-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.987889 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5xjk\" (UniqueName: \"kubernetes.io/projected/6aa88459-9af5-4ddd-a51e-32f0468ebc87-kube-api-access-h5xjk\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.988862 4661 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.989259 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.989639 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-config\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:39 crc kubenswrapper[4661]: I0120 18:23:39.990036 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.014013 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2x9d\" (UniqueName: \"kubernetes.io/projected/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-kube-api-access-n2x9d\") pod \"dnsmasq-dns-86db49b7ff-tpwdx\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.060769 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fplm7"] Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.068206 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fplm7"] Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.154729 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.181558 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa88459-9af5-4ddd-a51e-32f0468ebc87" path="/var/lib/kubelet/pods/6aa88459-9af5-4ddd-a51e-32f0468ebc87/volumes" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.229639 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.295259 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ct2d\" (UniqueName: \"kubernetes.io/projected/02aac5db-0152-44ea-94e6-4e8ef20cbe41-kube-api-access-7ct2d\") pod \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\" (UID: \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\") " Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.295716 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02aac5db-0152-44ea-94e6-4e8ef20cbe41-config\") pod \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\" (UID: \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\") " Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.295819 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02aac5db-0152-44ea-94e6-4e8ef20cbe41-dns-svc\") pod \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\" (UID: \"02aac5db-0152-44ea-94e6-4e8ef20cbe41\") " Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.298035 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02aac5db-0152-44ea-94e6-4e8ef20cbe41-config" (OuterVolumeSpecName: "config") pod "02aac5db-0152-44ea-94e6-4e8ef20cbe41" (UID: "02aac5db-0152-44ea-94e6-4e8ef20cbe41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.298353 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02aac5db-0152-44ea-94e6-4e8ef20cbe41-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02aac5db-0152-44ea-94e6-4e8ef20cbe41" (UID: "02aac5db-0152-44ea-94e6-4e8ef20cbe41"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.304171 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02aac5db-0152-44ea-94e6-4e8ef20cbe41-kube-api-access-7ct2d" (OuterVolumeSpecName: "kube-api-access-7ct2d") pod "02aac5db-0152-44ea-94e6-4e8ef20cbe41" (UID: "02aac5db-0152-44ea-94e6-4e8ef20cbe41"). InnerVolumeSpecName "kube-api-access-7ct2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.312357 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.313657 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.317200 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.317306 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.317821 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.317933 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-plsrt" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.334210 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.370474 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8mm55"] Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.398473 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26291859-ffb9-435a-92bd-7ebc53f7e4bc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.399083 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/26291859-ffb9-435a-92bd-7ebc53f7e4bc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.399166 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26291859-ffb9-435a-92bd-7ebc53f7e4bc-scripts\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.399255 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26291859-ffb9-435a-92bd-7ebc53f7e4bc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.399529 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26291859-ffb9-435a-92bd-7ebc53f7e4bc-config\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.399584 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26291859-ffb9-435a-92bd-7ebc53f7e4bc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.399642 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq865\" (UniqueName: 
\"kubernetes.io/projected/26291859-ffb9-435a-92bd-7ebc53f7e4bc-kube-api-access-fq865\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.399793 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ct2d\" (UniqueName: \"kubernetes.io/projected/02aac5db-0152-44ea-94e6-4e8ef20cbe41-kube-api-access-7ct2d\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.399810 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02aac5db-0152-44ea-94e6-4e8ef20cbe41-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.399823 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02aac5db-0152-44ea-94e6-4e8ef20cbe41-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.445382 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hl6rr"] Jan 20 18:23:40 crc kubenswrapper[4661]: W0120 18:23:40.448885 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabfb2e06_c405_418f_9bf9_83aeb2b3f706.slice/crio-f6179c3b6b5f9291b66364f469eba06e8879ddb79865bd048e5e5975df1c585d WatchSource:0}: Error finding container f6179c3b6b5f9291b66364f469eba06e8879ddb79865bd048e5e5975df1c585d: Status 404 returned error can't find the container with id f6179c3b6b5f9291b66364f469eba06e8879ddb79865bd048e5e5975df1c585d Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.501263 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26291859-ffb9-435a-92bd-7ebc53f7e4bc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.501358 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26291859-ffb9-435a-92bd-7ebc53f7e4bc-config\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.501388 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26291859-ffb9-435a-92bd-7ebc53f7e4bc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.501425 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq865\" (UniqueName: \"kubernetes.io/projected/26291859-ffb9-435a-92bd-7ebc53f7e4bc-kube-api-access-fq865\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.501469 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26291859-ffb9-435a-92bd-7ebc53f7e4bc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.501490 4661 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/26291859-ffb9-435a-92bd-7ebc53f7e4bc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.501528 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26291859-ffb9-435a-92bd-7ebc53f7e4bc-scripts\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.502739 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26291859-ffb9-435a-92bd-7ebc53f7e4bc-scripts\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.502831 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26291859-ffb9-435a-92bd-7ebc53f7e4bc-config\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.503294 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26291859-ffb9-435a-92bd-7ebc53f7e4bc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.512583 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/26291859-ffb9-435a-92bd-7ebc53f7e4bc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.514527 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26291859-ffb9-435a-92bd-7ebc53f7e4bc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.519094 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26291859-ffb9-435a-92bd-7ebc53f7e4bc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.520691 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq865\" (UniqueName: \"kubernetes.io/projected/26291859-ffb9-435a-92bd-7ebc53f7e4bc-kube-api-access-fq865\") pod \"ovn-northd-0\" (UID: \"26291859-ffb9-435a-92bd-7ebc53f7e4bc\") " pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.627657 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.656861 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.758899 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tpwdx"] Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.916391 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8mm55" event={"ID":"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8","Type":"ContainerStarted","Data":"f1a0b791a33c5d8b26d8d9a62045cc0e43046abd5d772d340b03d1affb591bec"} Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.916824 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8mm55" event={"ID":"97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8","Type":"ContainerStarted","Data":"40ac7771fdeb7dd9dad024a97755aa1eef6e5b403e320b9725c92a5d608a9d47"} Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.918080 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" event={"ID":"ad7ce2d6-6f59-4934-b54c-d1d763e14c22","Type":"ContainerStarted","Data":"c7f6a11b2515772f670274153e8b86c0ce63d949dfed8f067e9abc14185068c2"} Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.920195 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ee2394e6-ec1c-4093-9c8d-6a5795f4d146","Type":"ContainerStarted","Data":"a670c149cf862318408f9d8a2fac608a6620a5a74a83aafc3dc5c523cc3a681b"} Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.920734 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.921822 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" event={"ID":"abfb2e06-c405-418f-9bf9-83aeb2b3f706","Type":"ContainerStarted","Data":"f6179c3b6b5f9291b66364f469eba06e8879ddb79865bd048e5e5975df1c585d"} Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.923861 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" event={"ID":"02aac5db-0152-44ea-94e6-4e8ef20cbe41","Type":"ContainerDied","Data":"0935cd502984ef028c1107da56e111886933782bbe1e11ebf81aba421c57901a"} Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.923926 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5v9q4" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.945470 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-8mm55" podStartSLOduration=1.945426251 podStartE2EDuration="1.945426251s" podCreationTimestamp="2026-01-20 18:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:23:40.930122036 +0000 UTC m=+1077.260911698" watchObservedRunningTime="2026-01-20 18:23:40.945426251 +0000 UTC m=+1077.276215923" Jan 20 18:23:40 crc kubenswrapper[4661]: I0120 18:23:40.991308 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.73415769 podStartE2EDuration="42.991288134s" podCreationTimestamp="2026-01-20 18:22:58 +0000 UTC" firstStartedPulling="2026-01-20 18:22:59.377622429 +0000 UTC m=+1035.708412091" lastFinishedPulling="2026-01-20 18:23:40.634752873 +0000 UTC m=+1076.965542535" observedRunningTime="2026-01-20 18:23:40.973607026 +0000 UTC m=+1077.304396688" watchObservedRunningTime="2026-01-20 18:23:40.991288134 +0000 UTC m=+1077.322077796" Jan 20 18:23:41 crc kubenswrapper[4661]: I0120 18:23:41.059188 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5v9q4"] Jan 20 18:23:41 crc kubenswrapper[4661]: I0120 18:23:41.069299 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5v9q4"] Jan 20 18:23:41 crc kubenswrapper[4661]: I0120 18:23:41.114095 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 20 18:23:41 crc kubenswrapper[4661]: W0120 18:23:41.115074 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26291859_ffb9_435a_92bd_7ebc53f7e4bc.slice/crio-9b822f2e434bb1dafc4af6590523d409059f418346ec868fea73f1323da86395 WatchSource:0}: Error finding container 9b822f2e434bb1dafc4af6590523d409059f418346ec868fea73f1323da86395: Status 404 returned error can't find the container with id 9b822f2e434bb1dafc4af6590523d409059f418346ec868fea73f1323da86395 Jan 20 18:23:41 crc kubenswrapper[4661]: I0120 18:23:41.932228 4661 generic.go:334] "Generic (PLEG): container finished" podID="abfb2e06-c405-418f-9bf9-83aeb2b3f706" containerID="ccf3d86081744efc06d5e9416839f08abfa050f98fcaa753ca7091f5462d6667" exitCode=0 Jan 20 18:23:41 crc kubenswrapper[4661]: I0120 18:23:41.932317 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" event={"ID":"abfb2e06-c405-418f-9bf9-83aeb2b3f706","Type":"ContainerDied","Data":"ccf3d86081744efc06d5e9416839f08abfa050f98fcaa753ca7091f5462d6667"} Jan 20 18:23:41 crc kubenswrapper[4661]: I0120 18:23:41.934626 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"26291859-ffb9-435a-92bd-7ebc53f7e4bc","Type":"ContainerStarted","Data":"9b822f2e434bb1dafc4af6590523d409059f418346ec868fea73f1323da86395"} Jan 20 18:23:42 crc kubenswrapper[4661]: I0120 18:23:42.152724 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02aac5db-0152-44ea-94e6-4e8ef20cbe41" path="/var/lib/kubelet/pods/02aac5db-0152-44ea-94e6-4e8ef20cbe41/volumes" Jan 20 18:23:42 crc kubenswrapper[4661]: I0120 18:23:42.944428 4661 generic.go:334] "Generic (PLEG): container finished" podID="ad7ce2d6-6f59-4934-b54c-d1d763e14c22" 
containerID="f658e4ffe764882dfdf026ba57515baa960160168300760983d30baa32fdc2ad" exitCode=0 Jan 20 18:23:42 crc kubenswrapper[4661]: I0120 18:23:42.944794 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" event={"ID":"ad7ce2d6-6f59-4934-b54c-d1d763e14c22","Type":"ContainerDied","Data":"f658e4ffe764882dfdf026ba57515baa960160168300760983d30baa32fdc2ad"} Jan 20 18:23:42 crc kubenswrapper[4661]: I0120 18:23:42.962568 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" event={"ID":"abfb2e06-c405-418f-9bf9-83aeb2b3f706","Type":"ContainerStarted","Data":"3feab671dde3ac83e74994d52cf4732099fe210db1909e83646cfd4ab4be6485"} Jan 20 18:23:42 crc kubenswrapper[4661]: I0120 18:23:42.962690 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:42 crc kubenswrapper[4661]: I0120 18:23:42.966038 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"26291859-ffb9-435a-92bd-7ebc53f7e4bc","Type":"ContainerStarted","Data":"3a2823e4c450d096e408f3d5571c23599d883baf082252178c30ed151079ab72"} Jan 20 18:23:42 crc kubenswrapper[4661]: I0120 18:23:42.991364 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" podStartSLOduration=3.473345855 podStartE2EDuration="3.991348537s" podCreationTimestamp="2026-01-20 18:23:39 +0000 UTC" firstStartedPulling="2026-01-20 18:23:40.451430724 +0000 UTC m=+1076.782220386" lastFinishedPulling="2026-01-20 18:23:40.969433406 +0000 UTC m=+1077.300223068" observedRunningTime="2026-01-20 18:23:42.989802106 +0000 UTC m=+1079.320591798" watchObservedRunningTime="2026-01-20 18:23:42.991348537 +0000 UTC m=+1079.322138199" Jan 20 18:23:43 crc kubenswrapper[4661]: I0120 18:23:43.978054 4661 generic.go:334] "Generic (PLEG): container finished" podID="1c705fc7-9ad0-4254-ad57-63db21057251" containerID="0c1cf65e4b07b5646bb3f7db29b931fb4b255d998e1838dbee9912bb2b5e47d4" exitCode=0 Jan 20 18:23:43 crc kubenswrapper[4661]: I0120 18:23:43.978169 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1c705fc7-9ad0-4254-ad57-63db21057251","Type":"ContainerDied","Data":"0c1cf65e4b07b5646bb3f7db29b931fb4b255d998e1838dbee9912bb2b5e47d4"} Jan 20 18:23:43 crc kubenswrapper[4661]: I0120 18:23:43.981321 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" event={"ID":"ad7ce2d6-6f59-4934-b54c-d1d763e14c22","Type":"ContainerStarted","Data":"90718394324f3ce0bcc7a3bc9fafd1d5d961f48bebf829f6bc2b013ce89bde2d"} Jan 20 18:23:43 crc kubenswrapper[4661]: I0120 18:23:43.982181 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:43 crc kubenswrapper[4661]: I0120 18:23:43.987541 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"26291859-ffb9-435a-92bd-7ebc53f7e4bc","Type":"ContainerStarted","Data":"600f6ad1d031c5cfc40adf1fc2e5c5c899693f3a953e13434a5131ca596fda97"} Jan 20 18:23:43 crc kubenswrapper[4661]: I0120 18:23:43.988088 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 20 18:23:44 crc kubenswrapper[4661]: I0120 18:23:44.059885 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.45197073 
podStartE2EDuration="4.0598649s" podCreationTimestamp="2026-01-20 18:23:40 +0000 UTC" firstStartedPulling="2026-01-20 18:23:41.117525153 +0000 UTC m=+1077.448314815" lastFinishedPulling="2026-01-20 18:23:42.725419313 +0000 UTC m=+1079.056208985" observedRunningTime="2026-01-20 18:23:44.046512557 +0000 UTC m=+1080.377302259" watchObservedRunningTime="2026-01-20 18:23:44.0598649 +0000 UTC m=+1080.390654572" Jan 20 18:23:44 crc kubenswrapper[4661]: I0120 18:23:44.076224 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" podStartSLOduration=4.617862018 podStartE2EDuration="5.076207992s" podCreationTimestamp="2026-01-20 18:23:39 +0000 UTC" firstStartedPulling="2026-01-20 18:23:40.787142714 +0000 UTC m=+1077.117932376" lastFinishedPulling="2026-01-20 18:23:41.245488688 +0000 UTC m=+1077.576278350" observedRunningTime="2026-01-20 18:23:44.071235851 +0000 UTC m=+1080.402025523" watchObservedRunningTime="2026-01-20 18:23:44.076207992 +0000 UTC m=+1080.406997674" Jan 20 18:23:45 crc kubenswrapper[4661]: I0120 18:23:45.000060 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1c705fc7-9ad0-4254-ad57-63db21057251","Type":"ContainerStarted","Data":"17ff8296027e4b0dc5f4ed1ba3a01350e971a1a593cd92cbf5bf8aacba82f71e"} Jan 20 18:23:45 crc kubenswrapper[4661]: I0120 18:23:45.003128 4661 generic.go:334] "Generic (PLEG): container finished" podID="1a3386fb-6ffa-47fa-8697-8d3c45ff61be" containerID="541c042338d70d95a20a237ddcf0560205f24c544d0bf1f913f65243150dd01f" exitCode=0 Jan 20 18:23:45 crc kubenswrapper[4661]: I0120 18:23:45.003206 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1a3386fb-6ffa-47fa-8697-8d3c45ff61be","Type":"ContainerDied","Data":"541c042338d70d95a20a237ddcf0560205f24c544d0bf1f913f65243150dd01f"} Jan 20 18:23:45 crc kubenswrapper[4661]: I0120 18:23:45.052319 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.796730687 podStartE2EDuration="50.052287621s" podCreationTimestamp="2026-01-20 18:22:55 +0000 UTC" firstStartedPulling="2026-01-20 18:22:57.364030366 +0000 UTC m=+1033.694820028" lastFinishedPulling="2026-01-20 18:23:38.6195873 +0000 UTC m=+1074.950376962" observedRunningTime="2026-01-20 18:23:45.040187441 +0000 UTC m=+1081.370977133" watchObservedRunningTime="2026-01-20 18:23:45.052287621 +0000 UTC m=+1081.383077323" Jan 20 18:23:46 crc kubenswrapper[4661]: I0120 18:23:46.018432 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1a3386fb-6ffa-47fa-8697-8d3c45ff61be","Type":"ContainerStarted","Data":"269e5669e9da178c545ab63733473e3ce1fd7dfdc9e84e187d28520e65a84b50"} Jan 20 18:23:46 crc kubenswrapper[4661]: I0120 18:23:46.062736 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371986.792114 podStartE2EDuration="50.062661955s" podCreationTimestamp="2026-01-20 18:22:56 +0000 UTC" firstStartedPulling="2026-01-20 18:22:59.218035661 +0000 UTC m=+1035.548825323" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:23:46.05001601 +0000 UTC m=+1082.380805752" watchObservedRunningTime="2026-01-20 18:23:46.062661955 +0000 UTC m=+1082.393451657" Jan 20 18:23:46 crc kubenswrapper[4661]: I0120 18:23:46.675192 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/openstack-galera-0" Jan 20 18:23:46 crc kubenswrapper[4661]: I0120 18:23:46.675652 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 20 18:23:48 crc kubenswrapper[4661]: I0120 18:23:48.043205 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 20 18:23:48 crc kubenswrapper[4661]: I0120 18:23:48.045016 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 20 18:23:48 crc kubenswrapper[4661]: I0120 18:23:48.582866 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 20 18:23:49 crc kubenswrapper[4661]: I0120 18:23:49.659486 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.158409 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.268707 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hl6rr"] Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.269181 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" podUID="abfb2e06-c405-418f-9bf9-83aeb2b3f706" containerName="dnsmasq-dns" containerID="cri-o://3feab671dde3ac83e74994d52cf4732099fe210db1909e83646cfd4ab4be6485" gracePeriod=10 Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.697922 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.819384 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.874445 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g4nk\" (UniqueName: \"kubernetes.io/projected/abfb2e06-c405-418f-9bf9-83aeb2b3f706-kube-api-access-4g4nk\") pod \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.874500 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-config\") pod \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.874586 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-dns-svc\") pod \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.874646 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-ovsdbserver-sb\") pod \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\" (UID: \"abfb2e06-c405-418f-9bf9-83aeb2b3f706\") " Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.880089 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/abfb2e06-c405-418f-9bf9-83aeb2b3f706-kube-api-access-4g4nk" (OuterVolumeSpecName: "kube-api-access-4g4nk") pod "abfb2e06-c405-418f-9bf9-83aeb2b3f706" (UID: "abfb2e06-c405-418f-9bf9-83aeb2b3f706"). InnerVolumeSpecName "kube-api-access-4g4nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.905491 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.924301 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "abfb2e06-c405-418f-9bf9-83aeb2b3f706" (UID: "abfb2e06-c405-418f-9bf9-83aeb2b3f706"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.937187 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-config" (OuterVolumeSpecName: "config") pod "abfb2e06-c405-418f-9bf9-83aeb2b3f706" (UID: "abfb2e06-c405-418f-9bf9-83aeb2b3f706"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.949615 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "abfb2e06-c405-418f-9bf9-83aeb2b3f706" (UID: "abfb2e06-c405-418f-9bf9-83aeb2b3f706"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.976841 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.976880 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.976899 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g4nk\" (UniqueName: \"kubernetes.io/projected/abfb2e06-c405-418f-9bf9-83aeb2b3f706-kube-api-access-4g4nk\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:50 crc kubenswrapper[4661]: I0120 18:23:50.976911 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfb2e06-c405-418f-9bf9-83aeb2b3f706-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:51 crc kubenswrapper[4661]: I0120 18:23:51.063499 4661 generic.go:334] "Generic (PLEG): container finished" podID="abfb2e06-c405-418f-9bf9-83aeb2b3f706" containerID="3feab671dde3ac83e74994d52cf4732099fe210db1909e83646cfd4ab4be6485" exitCode=0 Jan 20 18:23:51 crc kubenswrapper[4661]: I0120 18:23:51.064234 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" Jan 20 18:23:51 crc kubenswrapper[4661]: I0120 18:23:51.064871 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" event={"ID":"abfb2e06-c405-418f-9bf9-83aeb2b3f706","Type":"ContainerDied","Data":"3feab671dde3ac83e74994d52cf4732099fe210db1909e83646cfd4ab4be6485"} Jan 20 18:23:51 crc kubenswrapper[4661]: I0120 18:23:51.064930 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hl6rr" event={"ID":"abfb2e06-c405-418f-9bf9-83aeb2b3f706","Type":"ContainerDied","Data":"f6179c3b6b5f9291b66364f469eba06e8879ddb79865bd048e5e5975df1c585d"} Jan 20 18:23:51 crc kubenswrapper[4661]: I0120 18:23:51.064971 4661 scope.go:117] "RemoveContainer" containerID="3feab671dde3ac83e74994d52cf4732099fe210db1909e83646cfd4ab4be6485" Jan 20 18:23:51 crc kubenswrapper[4661]: I0120 18:23:51.092974 4661 scope.go:117] "RemoveContainer" containerID="ccf3d86081744efc06d5e9416839f08abfa050f98fcaa753ca7091f5462d6667" Jan 20 18:23:51 crc kubenswrapper[4661]: I0120 18:23:51.096327 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hl6rr"] Jan 20 18:23:51 crc kubenswrapper[4661]: I0120 18:23:51.106110 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hl6rr"] Jan 20 18:23:51 crc kubenswrapper[4661]: I0120 18:23:51.108948 4661 scope.go:117] "RemoveContainer" containerID="3feab671dde3ac83e74994d52cf4732099fe210db1909e83646cfd4ab4be6485" Jan 20 18:23:51 crc kubenswrapper[4661]: E0120 18:23:51.109274 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3feab671dde3ac83e74994d52cf4732099fe210db1909e83646cfd4ab4be6485\": container with ID starting with 3feab671dde3ac83e74994d52cf4732099fe210db1909e83646cfd4ab4be6485 not found: ID does not exist" containerID="3feab671dde3ac83e74994d52cf4732099fe210db1909e83646cfd4ab4be6485" Jan 20 18:23:51 crc kubenswrapper[4661]: I0120 18:23:51.109302 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3feab671dde3ac83e74994d52cf4732099fe210db1909e83646cfd4ab4be6485"} err="failed to get container status \"3feab671dde3ac83e74994d52cf4732099fe210db1909e83646cfd4ab4be6485\": rpc error: code = NotFound desc = could not find container \"3feab671dde3ac83e74994d52cf4732099fe210db1909e83646cfd4ab4be6485\": container with ID starting with 3feab671dde3ac83e74994d52cf4732099fe210db1909e83646cfd4ab4be6485 not found: ID does not exist" Jan 20 18:23:51 crc kubenswrapper[4661]: I0120 18:23:51.109324 4661 scope.go:117] "RemoveContainer" containerID="ccf3d86081744efc06d5e9416839f08abfa050f98fcaa753ca7091f5462d6667" Jan 20 18:23:51 crc kubenswrapper[4661]: E0120 18:23:51.109789 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccf3d86081744efc06d5e9416839f08abfa050f98fcaa753ca7091f5462d6667\": container with ID starting with ccf3d86081744efc06d5e9416839f08abfa050f98fcaa753ca7091f5462d6667 not found: ID does not exist" containerID="ccf3d86081744efc06d5e9416839f08abfa050f98fcaa753ca7091f5462d6667" Jan 20 18:23:51 crc kubenswrapper[4661]: I0120 18:23:51.109811 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf3d86081744efc06d5e9416839f08abfa050f98fcaa753ca7091f5462d6667"} err="failed to get container status 
\"ccf3d86081744efc06d5e9416839f08abfa050f98fcaa753ca7091f5462d6667\": rpc error: code = NotFound desc = could not find container \"ccf3d86081744efc06d5e9416839f08abfa050f98fcaa753ca7091f5462d6667\": container with ID starting with ccf3d86081744efc06d5e9416839f08abfa050f98fcaa753ca7091f5462d6667 not found: ID does not exist" Jan 20 18:23:52 crc kubenswrapper[4661]: I0120 18:23:52.160927 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abfb2e06-c405-418f-9bf9-83aeb2b3f706" path="/var/lib/kubelet/pods/abfb2e06-c405-418f-9bf9-83aeb2b3f706/volumes" Jan 20 18:23:52 crc kubenswrapper[4661]: I0120 18:23:52.186542 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 20 18:23:52 crc kubenswrapper[4661]: I0120 18:23:52.266824 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.395550 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nd6kj"] Jan 20 18:23:55 crc kubenswrapper[4661]: E0120 18:23:55.396280 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abfb2e06-c405-418f-9bf9-83aeb2b3f706" containerName="dnsmasq-dns" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.396296 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="abfb2e06-c405-418f-9bf9-83aeb2b3f706" containerName="dnsmasq-dns" Jan 20 18:23:55 crc kubenswrapper[4661]: E0120 18:23:55.396317 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abfb2e06-c405-418f-9bf9-83aeb2b3f706" containerName="init" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.396324 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="abfb2e06-c405-418f-9bf9-83aeb2b3f706" containerName="init" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.396496 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="abfb2e06-c405-418f-9bf9-83aeb2b3f706" containerName="dnsmasq-dns" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.397080 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nd6kj" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.399735 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.410536 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nd6kj"] Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.497507 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c3d076-1a7b-4376-88d8-5544a19f4a9d-operator-scripts\") pod \"root-account-create-update-nd6kj\" (UID: \"23c3d076-1a7b-4376-88d8-5544a19f4a9d\") " pod="openstack/root-account-create-update-nd6kj" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.497561 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69785\" (UniqueName: \"kubernetes.io/projected/23c3d076-1a7b-4376-88d8-5544a19f4a9d-kube-api-access-69785\") pod \"root-account-create-update-nd6kj\" (UID: \"23c3d076-1a7b-4376-88d8-5544a19f4a9d\") " pod="openstack/root-account-create-update-nd6kj" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.599134 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c3d076-1a7b-4376-88d8-5544a19f4a9d-operator-scripts\") pod \"root-account-create-update-nd6kj\" (UID: \"23c3d076-1a7b-4376-88d8-5544a19f4a9d\") " pod="openstack/root-account-create-update-nd6kj" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.599200 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69785\" (UniqueName: \"kubernetes.io/projected/23c3d076-1a7b-4376-88d8-5544a19f4a9d-kube-api-access-69785\") pod \"root-account-create-update-nd6kj\" (UID: \"23c3d076-1a7b-4376-88d8-5544a19f4a9d\") " pod="openstack/root-account-create-update-nd6kj" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.600015 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c3d076-1a7b-4376-88d8-5544a19f4a9d-operator-scripts\") pod \"root-account-create-update-nd6kj\" (UID: \"23c3d076-1a7b-4376-88d8-5544a19f4a9d\") " pod="openstack/root-account-create-update-nd6kj" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.619203 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69785\" (UniqueName: \"kubernetes.io/projected/23c3d076-1a7b-4376-88d8-5544a19f4a9d-kube-api-access-69785\") pod \"root-account-create-update-nd6kj\" (UID: \"23c3d076-1a7b-4376-88d8-5544a19f4a9d\") " pod="openstack/root-account-create-update-nd6kj" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.719884 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 20 18:23:55 crc kubenswrapper[4661]: I0120 18:23:55.721082 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nd6kj" Jan 20 18:23:56 crc kubenswrapper[4661]: I0120 18:23:56.200203 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nd6kj"] Jan 20 18:23:56 crc kubenswrapper[4661]: W0120 18:23:56.212942 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23c3d076_1a7b_4376_88d8_5544a19f4a9d.slice/crio-1e49b0ec3610be36fd7eadb474476f252915efcc2b9199eac7a5ab3b7e2f0812 WatchSource:0}: Error finding container 1e49b0ec3610be36fd7eadb474476f252915efcc2b9199eac7a5ab3b7e2f0812: Status 404 returned error can't find the container with id 1e49b0ec3610be36fd7eadb474476f252915efcc2b9199eac7a5ab3b7e2f0812 Jan 20 18:23:57 crc kubenswrapper[4661]: I0120 18:23:57.118105 4661 generic.go:334] "Generic (PLEG): container finished" podID="23c3d076-1a7b-4376-88d8-5544a19f4a9d" containerID="94410b0d1d664184358059c9752a856bce000fdbd3932d5acd266822b3cc9626" exitCode=0 Jan 20 18:23:57 crc kubenswrapper[4661]: I0120 18:23:57.118396 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nd6kj" event={"ID":"23c3d076-1a7b-4376-88d8-5544a19f4a9d","Type":"ContainerDied","Data":"94410b0d1d664184358059c9752a856bce000fdbd3932d5acd266822b3cc9626"} Jan 20 18:23:57 crc kubenswrapper[4661]: I0120 18:23:57.118425 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nd6kj" event={"ID":"23c3d076-1a7b-4376-88d8-5544a19f4a9d","Type":"ContainerStarted","Data":"1e49b0ec3610be36fd7eadb474476f252915efcc2b9199eac7a5ab3b7e2f0812"} Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.123248 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7hvj8"] Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.125869 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7hvj8" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.129501 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7hvj8"] Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.242164 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece50d74-6552-42cf-b0df-72749a5e2edb-operator-scripts\") pod \"keystone-db-create-7hvj8\" (UID: \"ece50d74-6552-42cf-b0df-72749a5e2edb\") " pod="openstack/keystone-db-create-7hvj8" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.242583 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5m69\" (UniqueName: \"kubernetes.io/projected/ece50d74-6552-42cf-b0df-72749a5e2edb-kube-api-access-v5m69\") pod \"keystone-db-create-7hvj8\" (UID: \"ece50d74-6552-42cf-b0df-72749a5e2edb\") " pod="openstack/keystone-db-create-7hvj8" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.249245 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-25c4-account-create-update-ztjbg"] Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.250186 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-25c4-account-create-update-ztjbg" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.252761 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.261253 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-25c4-account-create-update-ztjbg"] Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.343974 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5m69\" (UniqueName: \"kubernetes.io/projected/ece50d74-6552-42cf-b0df-72749a5e2edb-kube-api-access-v5m69\") pod \"keystone-db-create-7hvj8\" (UID: \"ece50d74-6552-42cf-b0df-72749a5e2edb\") " pod="openstack/keystone-db-create-7hvj8" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.344042 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28dece6c-e91e-47ad-997e-0b0e6575a39c-operator-scripts\") pod \"keystone-25c4-account-create-update-ztjbg\" (UID: \"28dece6c-e91e-47ad-997e-0b0e6575a39c\") " pod="openstack/keystone-25c4-account-create-update-ztjbg" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.344070 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece50d74-6552-42cf-b0df-72749a5e2edb-operator-scripts\") pod \"keystone-db-create-7hvj8\" (UID: \"ece50d74-6552-42cf-b0df-72749a5e2edb\") " pod="openstack/keystone-db-create-7hvj8" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.344194 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46m5s\" (UniqueName: \"kubernetes.io/projected/28dece6c-e91e-47ad-997e-0b0e6575a39c-kube-api-access-46m5s\") pod \"keystone-25c4-account-create-update-ztjbg\" (UID: \"28dece6c-e91e-47ad-997e-0b0e6575a39c\") " pod="openstack/keystone-25c4-account-create-update-ztjbg" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.344960 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece50d74-6552-42cf-b0df-72749a5e2edb-operator-scripts\") pod \"keystone-db-create-7hvj8\" (UID: \"ece50d74-6552-42cf-b0df-72749a5e2edb\") " pod="openstack/keystone-db-create-7hvj8" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.368482 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5m69\" (UniqueName: \"kubernetes.io/projected/ece50d74-6552-42cf-b0df-72749a5e2edb-kube-api-access-v5m69\") pod \"keystone-db-create-7hvj8\" (UID: \"ece50d74-6552-42cf-b0df-72749a5e2edb\") " pod="openstack/keystone-db-create-7hvj8" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.447846 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7hvj8" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.447930 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28dece6c-e91e-47ad-997e-0b0e6575a39c-operator-scripts\") pod \"keystone-25c4-account-create-update-ztjbg\" (UID: \"28dece6c-e91e-47ad-997e-0b0e6575a39c\") " pod="openstack/keystone-25c4-account-create-update-ztjbg" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.447985 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46m5s\" (UniqueName: \"kubernetes.io/projected/28dece6c-e91e-47ad-997e-0b0e6575a39c-kube-api-access-46m5s\") pod \"keystone-25c4-account-create-update-ztjbg\" (UID: \"28dece6c-e91e-47ad-997e-0b0e6575a39c\") " pod="openstack/keystone-25c4-account-create-update-ztjbg" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.448887 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28dece6c-e91e-47ad-997e-0b0e6575a39c-operator-scripts\") pod \"keystone-25c4-account-create-update-ztjbg\" (UID: \"28dece6c-e91e-47ad-997e-0b0e6575a39c\") " pod="openstack/keystone-25c4-account-create-update-ztjbg" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.465019 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nd6kj" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.475025 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-46xmv"] Jan 20 18:23:58 crc kubenswrapper[4661]: E0120 18:23:58.475392 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c3d076-1a7b-4376-88d8-5544a19f4a9d" containerName="mariadb-account-create-update" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.475415 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c3d076-1a7b-4376-88d8-5544a19f4a9d" containerName="mariadb-account-create-update" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.475655 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c3d076-1a7b-4376-88d8-5544a19f4a9d" containerName="mariadb-account-create-update" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.477436 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-46xmv" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.489366 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-46xmv"] Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.505986 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46m5s\" (UniqueName: \"kubernetes.io/projected/28dece6c-e91e-47ad-997e-0b0e6575a39c-kube-api-access-46m5s\") pod \"keystone-25c4-account-create-update-ztjbg\" (UID: \"28dece6c-e91e-47ad-997e-0b0e6575a39c\") " pod="openstack/keystone-25c4-account-create-update-ztjbg" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.550416 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69785\" (UniqueName: \"kubernetes.io/projected/23c3d076-1a7b-4376-88d8-5544a19f4a9d-kube-api-access-69785\") pod \"23c3d076-1a7b-4376-88d8-5544a19f4a9d\" (UID: \"23c3d076-1a7b-4376-88d8-5544a19f4a9d\") " Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.550509 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c3d076-1a7b-4376-88d8-5544a19f4a9d-operator-scripts\") pod \"23c3d076-1a7b-4376-88d8-5544a19f4a9d\" (UID: \"23c3d076-1a7b-4376-88d8-5544a19f4a9d\") " Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.551805 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23c3d076-1a7b-4376-88d8-5544a19f4a9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23c3d076-1a7b-4376-88d8-5544a19f4a9d" (UID: "23c3d076-1a7b-4376-88d8-5544a19f4a9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.569804 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c3d076-1a7b-4376-88d8-5544a19f4a9d-kube-api-access-69785" (OuterVolumeSpecName: "kube-api-access-69785") pod "23c3d076-1a7b-4376-88d8-5544a19f4a9d" (UID: "23c3d076-1a7b-4376-88d8-5544a19f4a9d"). InnerVolumeSpecName "kube-api-access-69785". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.569839 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-25c4-account-create-update-ztjbg" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.594827 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8948-account-create-update-s48vf"] Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.595943 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8948-account-create-update-s48vf" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.598681 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.600971 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8948-account-create-update-s48vf"] Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.652690 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b6667ee-ad18-4f1c-9e5c-ff6574793de1-operator-scripts\") pod \"placement-db-create-46xmv\" (UID: \"4b6667ee-ad18-4f1c-9e5c-ff6574793de1\") " pod="openstack/placement-db-create-46xmv" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.652783 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcqsb\" (UniqueName: \"kubernetes.io/projected/4b6667ee-ad18-4f1c-9e5c-ff6574793de1-kube-api-access-dcqsb\") pod \"placement-db-create-46xmv\" (UID: \"4b6667ee-ad18-4f1c-9e5c-ff6574793de1\") " pod="openstack/placement-db-create-46xmv" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.652887 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69785\" (UniqueName: \"kubernetes.io/projected/23c3d076-1a7b-4376-88d8-5544a19f4a9d-kube-api-access-69785\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.652898 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c3d076-1a7b-4376-88d8-5544a19f4a9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.755089 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b6667ee-ad18-4f1c-9e5c-ff6574793de1-operator-scripts\") pod \"placement-db-create-46xmv\" (UID: \"4b6667ee-ad18-4f1c-9e5c-ff6574793de1\") " pod="openstack/placement-db-create-46xmv" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.755164 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkd9w\" (UniqueName: \"kubernetes.io/projected/d1035984-3393-4204-9840-6ede3ceef2e0-kube-api-access-mkd9w\") pod \"placement-8948-account-create-update-s48vf\" (UID: \"d1035984-3393-4204-9840-6ede3ceef2e0\") " pod="openstack/placement-8948-account-create-update-s48vf" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.755238 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcqsb\" (UniqueName: \"kubernetes.io/projected/4b6667ee-ad18-4f1c-9e5c-ff6574793de1-kube-api-access-dcqsb\") pod \"placement-db-create-46xmv\" (UID: \"4b6667ee-ad18-4f1c-9e5c-ff6574793de1\") " pod="openstack/placement-db-create-46xmv" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.755276 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1035984-3393-4204-9840-6ede3ceef2e0-operator-scripts\") pod \"placement-8948-account-create-update-s48vf\" (UID: \"d1035984-3393-4204-9840-6ede3ceef2e0\") " pod="openstack/placement-8948-account-create-update-s48vf" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.757004 4661 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b6667ee-ad18-4f1c-9e5c-ff6574793de1-operator-scripts\") pod \"placement-db-create-46xmv\" (UID: \"4b6667ee-ad18-4f1c-9e5c-ff6574793de1\") " pod="openstack/placement-db-create-46xmv" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.772424 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-skwf2"] Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.777696 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-skwf2" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.779955 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcqsb\" (UniqueName: \"kubernetes.io/projected/4b6667ee-ad18-4f1c-9e5c-ff6574793de1-kube-api-access-dcqsb\") pod \"placement-db-create-46xmv\" (UID: \"4b6667ee-ad18-4f1c-9e5c-ff6574793de1\") " pod="openstack/placement-db-create-46xmv" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.798702 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-skwf2"] Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.842949 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-46xmv" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.858435 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1035984-3393-4204-9840-6ede3ceef2e0-operator-scripts\") pod \"placement-8948-account-create-update-s48vf\" (UID: \"d1035984-3393-4204-9840-6ede3ceef2e0\") " pod="openstack/placement-8948-account-create-update-s48vf" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.858562 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkd9w\" (UniqueName: \"kubernetes.io/projected/d1035984-3393-4204-9840-6ede3ceef2e0-kube-api-access-mkd9w\") pod \"placement-8948-account-create-update-s48vf\" (UID: \"d1035984-3393-4204-9840-6ede3ceef2e0\") " pod="openstack/placement-8948-account-create-update-s48vf" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.863563 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1035984-3393-4204-9840-6ede3ceef2e0-operator-scripts\") pod \"placement-8948-account-create-update-s48vf\" (UID: \"d1035984-3393-4204-9840-6ede3ceef2e0\") " pod="openstack/placement-8948-account-create-update-s48vf" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.876825 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7af2-account-create-update-42cbf"] Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.877857 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7af2-account-create-update-42cbf" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.886912 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.895862 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7af2-account-create-update-42cbf"] Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.907398 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkd9w\" (UniqueName: \"kubernetes.io/projected/d1035984-3393-4204-9840-6ede3ceef2e0-kube-api-access-mkd9w\") pod \"placement-8948-account-create-update-s48vf\" (UID: \"d1035984-3393-4204-9840-6ede3ceef2e0\") " pod="openstack/placement-8948-account-create-update-s48vf" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.915195 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8948-account-create-update-s48vf" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.945463 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7hvj8"] Jan 20 18:23:58 crc kubenswrapper[4661]: W0120 18:23:58.959932 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece50d74_6552_42cf_b0df_72749a5e2edb.slice/crio-8458399fe4838db0e0abab65fbbaf21df11359a349c0ae70ef7698c81ae96479 WatchSource:0}: Error finding container 8458399fe4838db0e0abab65fbbaf21df11359a349c0ae70ef7698c81ae96479: Status 404 returned error can't find the container with id 8458399fe4838db0e0abab65fbbaf21df11359a349c0ae70ef7698c81ae96479 Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.960019 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd6f1c00-4ad6-48c1-9f69-88479e6de2f7-operator-scripts\") pod \"glance-db-create-skwf2\" (UID: \"fd6f1c00-4ad6-48c1-9f69-88479e6de2f7\") " pod="openstack/glance-db-create-skwf2" Jan 20 18:23:58 crc kubenswrapper[4661]: I0120 18:23:58.960090 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdtbx\" (UniqueName: \"kubernetes.io/projected/fd6f1c00-4ad6-48c1-9f69-88479e6de2f7-kube-api-access-zdtbx\") pod \"glance-db-create-skwf2\" (UID: \"fd6f1c00-4ad6-48c1-9f69-88479e6de2f7\") " pod="openstack/glance-db-create-skwf2" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.061919 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2m4k\" (UniqueName: \"kubernetes.io/projected/bca3d32d-572f-43fc-87b1-0a1a25a49703-kube-api-access-v2m4k\") pod \"glance-7af2-account-create-update-42cbf\" (UID: \"bca3d32d-572f-43fc-87b1-0a1a25a49703\") " pod="openstack/glance-7af2-account-create-update-42cbf" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.062549 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd6f1c00-4ad6-48c1-9f69-88479e6de2f7-operator-scripts\") pod \"glance-db-create-skwf2\" (UID: \"fd6f1c00-4ad6-48c1-9f69-88479e6de2f7\") " pod="openstack/glance-db-create-skwf2" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.062583 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca3d32d-572f-43fc-87b1-0a1a25a49703-operator-scripts\") pod \"glance-7af2-account-create-update-42cbf\" (UID: \"bca3d32d-572f-43fc-87b1-0a1a25a49703\") " pod="openstack/glance-7af2-account-create-update-42cbf" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.062614 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdtbx\" (UniqueName: \"kubernetes.io/projected/fd6f1c00-4ad6-48c1-9f69-88479e6de2f7-kube-api-access-zdtbx\") pod \"glance-db-create-skwf2\" (UID: \"fd6f1c00-4ad6-48c1-9f69-88479e6de2f7\") " pod="openstack/glance-db-create-skwf2" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.063490 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd6f1c00-4ad6-48c1-9f69-88479e6de2f7-operator-scripts\") pod \"glance-db-create-skwf2\" (UID: \"fd6f1c00-4ad6-48c1-9f69-88479e6de2f7\") " pod="openstack/glance-db-create-skwf2" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.078373 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdtbx\" (UniqueName: \"kubernetes.io/projected/fd6f1c00-4ad6-48c1-9f69-88479e6de2f7-kube-api-access-zdtbx\") pod \"glance-db-create-skwf2\" (UID: \"fd6f1c00-4ad6-48c1-9f69-88479e6de2f7\") " pod="openstack/glance-db-create-skwf2" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.093279 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-skwf2" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.117595 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-25c4-account-create-update-ztjbg"] Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.158065 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7hvj8" event={"ID":"ece50d74-6552-42cf-b0df-72749a5e2edb","Type":"ContainerStarted","Data":"8458399fe4838db0e0abab65fbbaf21df11359a349c0ae70ef7698c81ae96479"} Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.164564 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2m4k\" (UniqueName: \"kubernetes.io/projected/bca3d32d-572f-43fc-87b1-0a1a25a49703-kube-api-access-v2m4k\") pod \"glance-7af2-account-create-update-42cbf\" (UID: \"bca3d32d-572f-43fc-87b1-0a1a25a49703\") " pod="openstack/glance-7af2-account-create-update-42cbf" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.164657 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca3d32d-572f-43fc-87b1-0a1a25a49703-operator-scripts\") pod \"glance-7af2-account-create-update-42cbf\" (UID: \"bca3d32d-572f-43fc-87b1-0a1a25a49703\") " pod="openstack/glance-7af2-account-create-update-42cbf" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.165585 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca3d32d-572f-43fc-87b1-0a1a25a49703-operator-scripts\") pod \"glance-7af2-account-create-update-42cbf\" (UID: \"bca3d32d-572f-43fc-87b1-0a1a25a49703\") " pod="openstack/glance-7af2-account-create-update-42cbf" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.168253 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nd6kj" 
event={"ID":"23c3d076-1a7b-4376-88d8-5544a19f4a9d","Type":"ContainerDied","Data":"1e49b0ec3610be36fd7eadb474476f252915efcc2b9199eac7a5ab3b7e2f0812"} Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.168293 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e49b0ec3610be36fd7eadb474476f252915efcc2b9199eac7a5ab3b7e2f0812" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.168352 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nd6kj" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.189614 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2m4k\" (UniqueName: \"kubernetes.io/projected/bca3d32d-572f-43fc-87b1-0a1a25a49703-kube-api-access-v2m4k\") pod \"glance-7af2-account-create-update-42cbf\" (UID: \"bca3d32d-572f-43fc-87b1-0a1a25a49703\") " pod="openstack/glance-7af2-account-create-update-42cbf" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.202313 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7af2-account-create-update-42cbf" Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.331009 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-46xmv"] Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.431654 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8948-account-create-update-s48vf"] Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.593027 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-skwf2"] Jan 20 18:23:59 crc kubenswrapper[4661]: I0120 18:23:59.717078 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7af2-account-create-update-42cbf"] Jan 20 18:23:59 crc kubenswrapper[4661]: W0120 18:23:59.718023 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbca3d32d_572f_43fc_87b1_0a1a25a49703.slice/crio-f6d0b20ec9d709595105e48c211c4b040ec0ded1ac2f44d89cc390e7280735c9 WatchSource:0}: Error finding container f6d0b20ec9d709595105e48c211c4b040ec0ded1ac2f44d89cc390e7280735c9: Status 404 returned error can't find the container with id f6d0b20ec9d709595105e48c211c4b040ec0ded1ac2f44d89cc390e7280735c9 Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.183978 4661 generic.go:334] "Generic (PLEG): container finished" podID="bca3d32d-572f-43fc-87b1-0a1a25a49703" containerID="1b98bc9b353adc6754aac8ed5d8c1c3647e1efd437d4a3a951202a4910cf9f1b" exitCode=0 Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.184340 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7af2-account-create-update-42cbf" event={"ID":"bca3d32d-572f-43fc-87b1-0a1a25a49703","Type":"ContainerDied","Data":"1b98bc9b353adc6754aac8ed5d8c1c3647e1efd437d4a3a951202a4910cf9f1b"} Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.184373 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7af2-account-create-update-42cbf" event={"ID":"bca3d32d-572f-43fc-87b1-0a1a25a49703","Type":"ContainerStarted","Data":"f6d0b20ec9d709595105e48c211c4b040ec0ded1ac2f44d89cc390e7280735c9"} Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.187784 4661 generic.go:334] "Generic (PLEG): container finished" podID="d1035984-3393-4204-9840-6ede3ceef2e0" containerID="10b0355e1ceaec1926b283ca42258049e0ac4ab4f55bfdcdf7ff23c401443adf" exitCode=0 Jan 20 
18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.187884 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8948-account-create-update-s48vf" event={"ID":"d1035984-3393-4204-9840-6ede3ceef2e0","Type":"ContainerDied","Data":"10b0355e1ceaec1926b283ca42258049e0ac4ab4f55bfdcdf7ff23c401443adf"} Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.188013 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8948-account-create-update-s48vf" event={"ID":"d1035984-3393-4204-9840-6ede3ceef2e0","Type":"ContainerStarted","Data":"30b2efe533b76b03e53b1a8c2f2cecb11db9739b879586cfc63d432f764e7094"} Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.194123 4661 generic.go:334] "Generic (PLEG): container finished" podID="28dece6c-e91e-47ad-997e-0b0e6575a39c" containerID="53dbd93507567298a5fc017b2004c1628a8d96a582e78b258b6696fb4175eeb0" exitCode=0 Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.194285 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-25c4-account-create-update-ztjbg" event={"ID":"28dece6c-e91e-47ad-997e-0b0e6575a39c","Type":"ContainerDied","Data":"53dbd93507567298a5fc017b2004c1628a8d96a582e78b258b6696fb4175eeb0"} Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.194350 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-25c4-account-create-update-ztjbg" event={"ID":"28dece6c-e91e-47ad-997e-0b0e6575a39c","Type":"ContainerStarted","Data":"3080aff343cd8c04dbb13c754bfd73ef8751a8c9b1c037701b4d61fb205759ac"} Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.196409 4661 generic.go:334] "Generic (PLEG): container finished" podID="ece50d74-6552-42cf-b0df-72749a5e2edb" containerID="9e4c8a7d827f28c523e5281cf0854f34859108e98f972c95150d51db93dfe8e0" exitCode=0 Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.196517 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7hvj8" event={"ID":"ece50d74-6552-42cf-b0df-72749a5e2edb","Type":"ContainerDied","Data":"9e4c8a7d827f28c523e5281cf0854f34859108e98f972c95150d51db93dfe8e0"} Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.211788 4661 generic.go:334] "Generic (PLEG): container finished" podID="4b6667ee-ad18-4f1c-9e5c-ff6574793de1" containerID="3fd3299ad3a125607c38b5170e61b32800735fb36df0c819cf020b3b58918cdd" exitCode=0 Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.211882 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-46xmv" event={"ID":"4b6667ee-ad18-4f1c-9e5c-ff6574793de1","Type":"ContainerDied","Data":"3fd3299ad3a125607c38b5170e61b32800735fb36df0c819cf020b3b58918cdd"} Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.212123 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-46xmv" event={"ID":"4b6667ee-ad18-4f1c-9e5c-ff6574793de1","Type":"ContainerStarted","Data":"305fb5c1c2cb607ece08e49aa9ec989e01b1d636e61ace16b193354d723a3e6e"} Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.213951 4661 generic.go:334] "Generic (PLEG): container finished" podID="fd6f1c00-4ad6-48c1-9f69-88479e6de2f7" containerID="c3bd633c20d931ea4c152a53725bd9c1498d20d4b02ae9324b567a0c1df8c057" exitCode=0 Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.214150 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-skwf2" 
event={"ID":"fd6f1c00-4ad6-48c1-9f69-88479e6de2f7","Type":"ContainerDied","Data":"c3bd633c20d931ea4c152a53725bd9c1498d20d4b02ae9324b567a0c1df8c057"} Jan 20 18:24:00 crc kubenswrapper[4661]: I0120 18:24:00.214245 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-skwf2" event={"ID":"fd6f1c00-4ad6-48c1-9f69-88479e6de2f7","Type":"ContainerStarted","Data":"edcdb25493c3651c54d0297849190f071b0a6711040f431b0485261fcf3f04fc"} Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.230138 4661 generic.go:334] "Generic (PLEG): container finished" podID="b764feba-067a-4a59-a23b-9a9b7725f420" containerID="8ded1c3a4a61ed6debad60aa74cc2e5774f7de46bb912d100ebb824fcb556ec7" exitCode=0 Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.230393 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b764feba-067a-4a59-a23b-9a9b7725f420","Type":"ContainerDied","Data":"8ded1c3a4a61ed6debad60aa74cc2e5774f7de46bb912d100ebb824fcb556ec7"} Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.673723 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nd6kj"] Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.678530 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nd6kj"] Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.713891 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8948-account-create-update-s48vf" Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.814545 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1035984-3393-4204-9840-6ede3ceef2e0-operator-scripts\") pod \"d1035984-3393-4204-9840-6ede3ceef2e0\" (UID: \"d1035984-3393-4204-9840-6ede3ceef2e0\") " Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.814794 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkd9w\" (UniqueName: \"kubernetes.io/projected/d1035984-3393-4204-9840-6ede3ceef2e0-kube-api-access-mkd9w\") pod \"d1035984-3393-4204-9840-6ede3ceef2e0\" (UID: \"d1035984-3393-4204-9840-6ede3ceef2e0\") " Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.815776 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1035984-3393-4204-9840-6ede3ceef2e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1035984-3393-4204-9840-6ede3ceef2e0" (UID: "d1035984-3393-4204-9840-6ede3ceef2e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.824314 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1035984-3393-4204-9840-6ede3ceef2e0-kube-api-access-mkd9w" (OuterVolumeSpecName: "kube-api-access-mkd9w") pod "d1035984-3393-4204-9840-6ede3ceef2e0" (UID: "d1035984-3393-4204-9840-6ede3ceef2e0"). InnerVolumeSpecName "kube-api-access-mkd9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.863855 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-25c4-account-create-update-ztjbg" Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.906726 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-46xmv" Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.916314 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1035984-3393-4204-9840-6ede3ceef2e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.916341 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkd9w\" (UniqueName: \"kubernetes.io/projected/d1035984-3393-4204-9840-6ede3ceef2e0-kube-api-access-mkd9w\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.925737 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7hvj8" Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.936642 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7af2-account-create-update-42cbf" Jan 20 18:24:01 crc kubenswrapper[4661]: I0120 18:24:01.946230 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-skwf2" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.017799 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdtbx\" (UniqueName: \"kubernetes.io/projected/fd6f1c00-4ad6-48c1-9f69-88479e6de2f7-kube-api-access-zdtbx\") pod \"fd6f1c00-4ad6-48c1-9f69-88479e6de2f7\" (UID: \"fd6f1c00-4ad6-48c1-9f69-88479e6de2f7\") " Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.017845 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5m69\" (UniqueName: \"kubernetes.io/projected/ece50d74-6552-42cf-b0df-72749a5e2edb-kube-api-access-v5m69\") pod \"ece50d74-6552-42cf-b0df-72749a5e2edb\" (UID: \"ece50d74-6552-42cf-b0df-72749a5e2edb\") " Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.017884 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46m5s\" (UniqueName: \"kubernetes.io/projected/28dece6c-e91e-47ad-997e-0b0e6575a39c-kube-api-access-46m5s\") pod \"28dece6c-e91e-47ad-997e-0b0e6575a39c\" (UID: \"28dece6c-e91e-47ad-997e-0b0e6575a39c\") " Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.017922 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28dece6c-e91e-47ad-997e-0b0e6575a39c-operator-scripts\") pod \"28dece6c-e91e-47ad-997e-0b0e6575a39c\" (UID: \"28dece6c-e91e-47ad-997e-0b0e6575a39c\") " Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.017959 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b6667ee-ad18-4f1c-9e5c-ff6574793de1-operator-scripts\") pod \"4b6667ee-ad18-4f1c-9e5c-ff6574793de1\" (UID: \"4b6667ee-ad18-4f1c-9e5c-ff6574793de1\") " Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.017995 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece50d74-6552-42cf-b0df-72749a5e2edb-operator-scripts\") pod \"ece50d74-6552-42cf-b0df-72749a5e2edb\" (UID: \"ece50d74-6552-42cf-b0df-72749a5e2edb\") " Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.018013 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2m4k\" (UniqueName: 
\"kubernetes.io/projected/bca3d32d-572f-43fc-87b1-0a1a25a49703-kube-api-access-v2m4k\") pod \"bca3d32d-572f-43fc-87b1-0a1a25a49703\" (UID: \"bca3d32d-572f-43fc-87b1-0a1a25a49703\") " Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.018098 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcqsb\" (UniqueName: \"kubernetes.io/projected/4b6667ee-ad18-4f1c-9e5c-ff6574793de1-kube-api-access-dcqsb\") pod \"4b6667ee-ad18-4f1c-9e5c-ff6574793de1\" (UID: \"4b6667ee-ad18-4f1c-9e5c-ff6574793de1\") " Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.018164 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca3d32d-572f-43fc-87b1-0a1a25a49703-operator-scripts\") pod \"bca3d32d-572f-43fc-87b1-0a1a25a49703\" (UID: \"bca3d32d-572f-43fc-87b1-0a1a25a49703\") " Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.018214 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd6f1c00-4ad6-48c1-9f69-88479e6de2f7-operator-scripts\") pod \"fd6f1c00-4ad6-48c1-9f69-88479e6de2f7\" (UID: \"fd6f1c00-4ad6-48c1-9f69-88479e6de2f7\") " Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.018917 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd6f1c00-4ad6-48c1-9f69-88479e6de2f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd6f1c00-4ad6-48c1-9f69-88479e6de2f7" (UID: "fd6f1c00-4ad6-48c1-9f69-88479e6de2f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.019905 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece50d74-6552-42cf-b0df-72749a5e2edb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ece50d74-6552-42cf-b0df-72749a5e2edb" (UID: "ece50d74-6552-42cf-b0df-72749a5e2edb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.019942 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bca3d32d-572f-43fc-87b1-0a1a25a49703-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bca3d32d-572f-43fc-87b1-0a1a25a49703" (UID: "bca3d32d-572f-43fc-87b1-0a1a25a49703"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.020316 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6667ee-ad18-4f1c-9e5c-ff6574793de1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b6667ee-ad18-4f1c-9e5c-ff6574793de1" (UID: "4b6667ee-ad18-4f1c-9e5c-ff6574793de1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.020497 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28dece6c-e91e-47ad-997e-0b0e6575a39c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28dece6c-e91e-47ad-997e-0b0e6575a39c" (UID: "28dece6c-e91e-47ad-997e-0b0e6575a39c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.022082 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd6f1c00-4ad6-48c1-9f69-88479e6de2f7-kube-api-access-zdtbx" (OuterVolumeSpecName: "kube-api-access-zdtbx") pod "fd6f1c00-4ad6-48c1-9f69-88479e6de2f7" (UID: "fd6f1c00-4ad6-48c1-9f69-88479e6de2f7"). InnerVolumeSpecName "kube-api-access-zdtbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.022196 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28dece6c-e91e-47ad-997e-0b0e6575a39c-kube-api-access-46m5s" (OuterVolumeSpecName: "kube-api-access-46m5s") pod "28dece6c-e91e-47ad-997e-0b0e6575a39c" (UID: "28dece6c-e91e-47ad-997e-0b0e6575a39c"). InnerVolumeSpecName "kube-api-access-46m5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.022435 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece50d74-6552-42cf-b0df-72749a5e2edb-kube-api-access-v5m69" (OuterVolumeSpecName: "kube-api-access-v5m69") pod "ece50d74-6552-42cf-b0df-72749a5e2edb" (UID: "ece50d74-6552-42cf-b0df-72749a5e2edb"). InnerVolumeSpecName "kube-api-access-v5m69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.022763 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6667ee-ad18-4f1c-9e5c-ff6574793de1-kube-api-access-dcqsb" (OuterVolumeSpecName: "kube-api-access-dcqsb") pod "4b6667ee-ad18-4f1c-9e5c-ff6574793de1" (UID: "4b6667ee-ad18-4f1c-9e5c-ff6574793de1"). InnerVolumeSpecName "kube-api-access-dcqsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.023178 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca3d32d-572f-43fc-87b1-0a1a25a49703-kube-api-access-v2m4k" (OuterVolumeSpecName: "kube-api-access-v2m4k") pod "bca3d32d-572f-43fc-87b1-0a1a25a49703" (UID: "bca3d32d-572f-43fc-87b1-0a1a25a49703"). InnerVolumeSpecName "kube-api-access-v2m4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.119598 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdtbx\" (UniqueName: \"kubernetes.io/projected/fd6f1c00-4ad6-48c1-9f69-88479e6de2f7-kube-api-access-zdtbx\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.119632 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5m69\" (UniqueName: \"kubernetes.io/projected/ece50d74-6552-42cf-b0df-72749a5e2edb-kube-api-access-v5m69\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.119640 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46m5s\" (UniqueName: \"kubernetes.io/projected/28dece6c-e91e-47ad-997e-0b0e6575a39c-kube-api-access-46m5s\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.119652 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28dece6c-e91e-47ad-997e-0b0e6575a39c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.119660 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b6667ee-ad18-4f1c-9e5c-ff6574793de1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.119683 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece50d74-6552-42cf-b0df-72749a5e2edb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.119693 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2m4k\" (UniqueName: \"kubernetes.io/projected/bca3d32d-572f-43fc-87b1-0a1a25a49703-kube-api-access-v2m4k\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.119701 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcqsb\" (UniqueName: \"kubernetes.io/projected/4b6667ee-ad18-4f1c-9e5c-ff6574793de1-kube-api-access-dcqsb\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.119710 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca3d32d-572f-43fc-87b1-0a1a25a49703-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.119718 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd6f1c00-4ad6-48c1-9f69-88479e6de2f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.151392 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23c3d076-1a7b-4376-88d8-5544a19f4a9d" path="/var/lib/kubelet/pods/23c3d076-1a7b-4376-88d8-5544a19f4a9d/volumes" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.240448 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7hvj8" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.240476 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7hvj8" event={"ID":"ece50d74-6552-42cf-b0df-72749a5e2edb","Type":"ContainerDied","Data":"8458399fe4838db0e0abab65fbbaf21df11359a349c0ae70ef7698c81ae96479"} Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.240620 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8458399fe4838db0e0abab65fbbaf21df11359a349c0ae70ef7698c81ae96479" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.242978 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-46xmv" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.242972 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-46xmv" event={"ID":"4b6667ee-ad18-4f1c-9e5c-ff6574793de1","Type":"ContainerDied","Data":"305fb5c1c2cb607ece08e49aa9ec989e01b1d636e61ace16b193354d723a3e6e"} Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.243102 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="305fb5c1c2cb607ece08e49aa9ec989e01b1d636e61ace16b193354d723a3e6e" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.246322 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-skwf2" event={"ID":"fd6f1c00-4ad6-48c1-9f69-88479e6de2f7","Type":"ContainerDied","Data":"edcdb25493c3651c54d0297849190f071b0a6711040f431b0485261fcf3f04fc"} Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.246383 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edcdb25493c3651c54d0297849190f071b0a6711040f431b0485261fcf3f04fc" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.246347 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-skwf2" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.248706 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7af2-account-create-update-42cbf" event={"ID":"bca3d32d-572f-43fc-87b1-0a1a25a49703","Type":"ContainerDied","Data":"f6d0b20ec9d709595105e48c211c4b040ec0ded1ac2f44d89cc390e7280735c9"} Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.248876 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6d0b20ec9d709595105e48c211c4b040ec0ded1ac2f44d89cc390e7280735c9" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.249057 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7af2-account-create-update-42cbf" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.254484 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b764feba-067a-4a59-a23b-9a9b7725f420","Type":"ContainerStarted","Data":"f76ef2f693bfe5015a9e3c2fa43181da0038d57efb1604983c4edf8f96f9dfef"} Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.254949 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.257884 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8948-account-create-update-s48vf" event={"ID":"d1035984-3393-4204-9840-6ede3ceef2e0","Type":"ContainerDied","Data":"30b2efe533b76b03e53b1a8c2f2cecb11db9739b879586cfc63d432f764e7094"} Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.257916 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30b2efe533b76b03e53b1a8c2f2cecb11db9739b879586cfc63d432f764e7094" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.258004 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8948-account-create-update-s48vf" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.259717 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-25c4-account-create-update-ztjbg" event={"ID":"28dece6c-e91e-47ad-997e-0b0e6575a39c","Type":"ContainerDied","Data":"3080aff343cd8c04dbb13c754bfd73ef8751a8c9b1c037701b4d61fb205759ac"} Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.259736 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3080aff343cd8c04dbb13c754bfd73ef8751a8c9b1c037701b4d61fb205759ac" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.259777 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-25c4-account-create-update-ztjbg" Jan 20 18:24:02 crc kubenswrapper[4661]: I0120 18:24:02.292893 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.343830006 podStartE2EDuration="1m9.292878368s" podCreationTimestamp="2026-01-20 18:22:53 +0000 UTC" firstStartedPulling="2026-01-20 18:22:55.825522818 +0000 UTC m=+1032.156312480" lastFinishedPulling="2026-01-20 18:23:26.77457117 +0000 UTC m=+1063.105360842" observedRunningTime="2026-01-20 18:24:02.284325682 +0000 UTC m=+1098.615115364" watchObservedRunningTime="2026-01-20 18:24:02.292878368 +0000 UTC m=+1098.623668030" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.102044 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.106138 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-p7h4x" podUID="65017fb7-6ab3-43d0-a308-a3d8da39b811" containerName="ovn-controller" probeResult="failure" output=< Jan 20 18:24:04 crc kubenswrapper[4661]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 20 18:24:04 crc kubenswrapper[4661]: > Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.113289 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9k84x" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.206964 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-tck9n"] Jan 20 18:24:04 crc kubenswrapper[4661]: E0120 18:24:04.207514 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6667ee-ad18-4f1c-9e5c-ff6574793de1" containerName="mariadb-database-create" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.207530 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6667ee-ad18-4f1c-9e5c-ff6574793de1" containerName="mariadb-database-create" Jan 20 18:24:04 crc kubenswrapper[4661]: E0120 18:24:04.207576 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece50d74-6552-42cf-b0df-72749a5e2edb" containerName="mariadb-database-create" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.207583 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece50d74-6552-42cf-b0df-72749a5e2edb" containerName="mariadb-database-create" Jan 20 18:24:04 crc kubenswrapper[4661]: E0120 18:24:04.207599 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1035984-3393-4204-9840-6ede3ceef2e0" containerName="mariadb-account-create-update" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.207605 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1035984-3393-4204-9840-6ede3ceef2e0" containerName="mariadb-account-create-update" Jan 20 18:24:04 crc kubenswrapper[4661]: E0120 18:24:04.207618 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6f1c00-4ad6-48c1-9f69-88479e6de2f7" containerName="mariadb-database-create" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.207623 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6f1c00-4ad6-48c1-9f69-88479e6de2f7" containerName="mariadb-database-create" Jan 20 18:24:04 crc kubenswrapper[4661]: E0120 18:24:04.207658 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca3d32d-572f-43fc-87b1-0a1a25a49703" containerName="mariadb-account-create-update" Jan 20 18:24:04 crc 
kubenswrapper[4661]: I0120 18:24:04.207686 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca3d32d-572f-43fc-87b1-0a1a25a49703" containerName="mariadb-account-create-update" Jan 20 18:24:04 crc kubenswrapper[4661]: E0120 18:24:04.207694 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28dece6c-e91e-47ad-997e-0b0e6575a39c" containerName="mariadb-account-create-update" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.207700 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="28dece6c-e91e-47ad-997e-0b0e6575a39c" containerName="mariadb-account-create-update" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.208174 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1035984-3393-4204-9840-6ede3ceef2e0" containerName="mariadb-account-create-update" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.208205 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6667ee-ad18-4f1c-9e5c-ff6574793de1" containerName="mariadb-database-create" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.208218 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca3d32d-572f-43fc-87b1-0a1a25a49703" containerName="mariadb-account-create-update" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.208232 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd6f1c00-4ad6-48c1-9f69-88479e6de2f7" containerName="mariadb-database-create" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.208241 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece50d74-6552-42cf-b0df-72749a5e2edb" containerName="mariadb-database-create" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.208251 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="28dece6c-e91e-47ad-997e-0b0e6575a39c" containerName="mariadb-account-create-update" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.209917 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.212139 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.212390 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q5zvh" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.226636 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tck9n"] Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.275679 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-db-sync-config-data\") pod \"glance-db-sync-tck9n\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.275950 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-config-data\") pod \"glance-db-sync-tck9n\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.276353 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlhwh\" (UniqueName: \"kubernetes.io/projected/64020290-0e73-480d-b523-d7cb664eacfd-kube-api-access-vlhwh\") pod \"glance-db-sync-tck9n\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.276497 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-combined-ca-bundle\") pod \"glance-db-sync-tck9n\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.363948 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-p7h4x-config-m9gbv"] Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.365285 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.368096 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.378455 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-db-sync-config-data\") pod \"glance-db-sync-tck9n\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.378555 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-config-data\") pod \"glance-db-sync-tck9n\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.378705 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlhwh\" (UniqueName: \"kubernetes.io/projected/64020290-0e73-480d-b523-d7cb664eacfd-kube-api-access-vlhwh\") pod \"glance-db-sync-tck9n\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.378791 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-combined-ca-bundle\") pod \"glance-db-sync-tck9n\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.387430 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-db-sync-config-data\") pod \"glance-db-sync-tck9n\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.387769 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-config-data\") pod \"glance-db-sync-tck9n\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.388055 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-combined-ca-bundle\") pod \"glance-db-sync-tck9n\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.393471 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p7h4x-config-m9gbv"] Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.404122 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlhwh\" (UniqueName: \"kubernetes.io/projected/64020290-0e73-480d-b523-d7cb664eacfd-kube-api-access-vlhwh\") pod \"glance-db-sync-tck9n\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.480080 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-lrmzw\" (UniqueName: \"kubernetes.io/projected/b8b28a87-c7f9-42a5-9820-91ab27411fcb-kube-api-access-lrmzw\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.480134 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8b28a87-c7f9-42a5-9820-91ab27411fcb-additional-scripts\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.480173 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8b28a87-c7f9-42a5-9820-91ab27411fcb-scripts\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.480236 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-log-ovn\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.480265 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-run\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.480424 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-run-ovn\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.531519 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.581910 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8b28a87-c7f9-42a5-9820-91ab27411fcb-additional-scripts\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.582520 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8b28a87-c7f9-42a5-9820-91ab27411fcb-scripts\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.582647 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-log-ovn\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.582781 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-run\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.582866 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-run-ovn\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.582974 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrmzw\" (UniqueName: \"kubernetes.io/projected/b8b28a87-c7f9-42a5-9820-91ab27411fcb-kube-api-access-lrmzw\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.583001 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8b28a87-c7f9-42a5-9820-91ab27411fcb-additional-scripts\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.583130 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-run\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.583170 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-log-ovn\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " 
pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.583277 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-run-ovn\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.584325 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8b28a87-c7f9-42a5-9820-91ab27411fcb-scripts\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.604445 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrmzw\" (UniqueName: \"kubernetes.io/projected/b8b28a87-c7f9-42a5-9820-91ab27411fcb-kube-api-access-lrmzw\") pod \"ovn-controller-p7h4x-config-m9gbv\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:04 crc kubenswrapper[4661]: I0120 18:24:04.679103 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:05 crc kubenswrapper[4661]: I0120 18:24:05.059584 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tck9n"] Jan 20 18:24:05 crc kubenswrapper[4661]: W0120 18:24:05.065597 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64020290_0e73_480d_b523_d7cb664eacfd.slice/crio-79e213f3358db4cd094bf3bccfc3937b7be5bf666a77547574f1bb83bb60a318 WatchSource:0}: Error finding container 79e213f3358db4cd094bf3bccfc3937b7be5bf666a77547574f1bb83bb60a318: Status 404 returned error can't find the container with id 79e213f3358db4cd094bf3bccfc3937b7be5bf666a77547574f1bb83bb60a318 Jan 20 18:24:05 crc kubenswrapper[4661]: I0120 18:24:05.121386 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p7h4x-config-m9gbv"] Jan 20 18:24:05 crc kubenswrapper[4661]: I0120 18:24:05.286111 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tck9n" event={"ID":"64020290-0e73-480d-b523-d7cb664eacfd","Type":"ContainerStarted","Data":"79e213f3358db4cd094bf3bccfc3937b7be5bf666a77547574f1bb83bb60a318"} Jan 20 18:24:05 crc kubenswrapper[4661]: I0120 18:24:05.287040 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7h4x-config-m9gbv" event={"ID":"b8b28a87-c7f9-42a5-9820-91ab27411fcb","Type":"ContainerStarted","Data":"5855d425f174676d633f7d418ea0d8023a8de78bf14a01e8971096edbec3dc53"} Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.296466 4661 generic.go:334] "Generic (PLEG): container finished" podID="19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" containerID="c76853192ee68da9d687430ebad4adc079589fbe7ce8c6c6524d5c045257a90f" exitCode=0 Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.296891 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c","Type":"ContainerDied","Data":"c76853192ee68da9d687430ebad4adc079589fbe7ce8c6c6524d5c045257a90f"} Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.300626 4661 generic.go:334] "Generic 
(PLEG): container finished" podID="b8b28a87-c7f9-42a5-9820-91ab27411fcb" containerID="50a31ec5968f93b74cfbba609fda811b5e0b978fdeae0a5f9a884b824faa4103" exitCode=0 Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.300702 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7h4x-config-m9gbv" event={"ID":"b8b28a87-c7f9-42a5-9820-91ab27411fcb","Type":"ContainerDied","Data":"50a31ec5968f93b74cfbba609fda811b5e0b978fdeae0a5f9a884b824faa4103"} Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.683182 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dkb68"] Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.684527 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dkb68" Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.692046 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.697698 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dkb68"] Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.823337 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1a5c71b-b855-4712-b208-fc14557dd032-operator-scripts\") pod \"root-account-create-update-dkb68\" (UID: \"f1a5c71b-b855-4712-b208-fc14557dd032\") " pod="openstack/root-account-create-update-dkb68" Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.823458 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c2rg\" (UniqueName: \"kubernetes.io/projected/f1a5c71b-b855-4712-b208-fc14557dd032-kube-api-access-7c2rg\") pod \"root-account-create-update-dkb68\" (UID: \"f1a5c71b-b855-4712-b208-fc14557dd032\") " pod="openstack/root-account-create-update-dkb68" Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.925959 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c2rg\" (UniqueName: \"kubernetes.io/projected/f1a5c71b-b855-4712-b208-fc14557dd032-kube-api-access-7c2rg\") pod \"root-account-create-update-dkb68\" (UID: \"f1a5c71b-b855-4712-b208-fc14557dd032\") " pod="openstack/root-account-create-update-dkb68" Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.926046 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1a5c71b-b855-4712-b208-fc14557dd032-operator-scripts\") pod \"root-account-create-update-dkb68\" (UID: \"f1a5c71b-b855-4712-b208-fc14557dd032\") " pod="openstack/root-account-create-update-dkb68" Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.926893 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1a5c71b-b855-4712-b208-fc14557dd032-operator-scripts\") pod \"root-account-create-update-dkb68\" (UID: \"f1a5c71b-b855-4712-b208-fc14557dd032\") " pod="openstack/root-account-create-update-dkb68" Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.953380 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c2rg\" (UniqueName: \"kubernetes.io/projected/f1a5c71b-b855-4712-b208-fc14557dd032-kube-api-access-7c2rg\") pod \"root-account-create-update-dkb68\" (UID: 
\"f1a5c71b-b855-4712-b208-fc14557dd032\") " pod="openstack/root-account-create-update-dkb68" Jan 20 18:24:06 crc kubenswrapper[4661]: I0120 18:24:06.998545 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dkb68" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.328352 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c","Type":"ContainerStarted","Data":"54ed9a66e76eeab8c7020b313888bd2f37e08aaa23c11a301fd10b6bde71813e"} Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.329163 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.356823 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371963.49797 podStartE2EDuration="1m13.356805473s" podCreationTimestamp="2026-01-20 18:22:54 +0000 UTC" firstStartedPulling="2026-01-20 18:22:56.109881491 +0000 UTC m=+1032.440671143" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:24:07.349840219 +0000 UTC m=+1103.680629881" watchObservedRunningTime="2026-01-20 18:24:07.356805473 +0000 UTC m=+1103.687595155" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.494735 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dkb68"] Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.643440 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.741242 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrmzw\" (UniqueName: \"kubernetes.io/projected/b8b28a87-c7f9-42a5-9820-91ab27411fcb-kube-api-access-lrmzw\") pod \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.741382 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8b28a87-c7f9-42a5-9820-91ab27411fcb-scripts\") pod \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.741428 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-run\") pod \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.741452 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-log-ovn\") pod \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.741510 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-run-ovn\") pod \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.741567 4661 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8b28a87-c7f9-42a5-9820-91ab27411fcb-additional-scripts\") pod \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\" (UID: \"b8b28a87-c7f9-42a5-9820-91ab27411fcb\") " Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.742273 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-run" (OuterVolumeSpecName: "var-run") pod "b8b28a87-c7f9-42a5-9820-91ab27411fcb" (UID: "b8b28a87-c7f9-42a5-9820-91ab27411fcb"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.742387 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b8b28a87-c7f9-42a5-9820-91ab27411fcb" (UID: "b8b28a87-c7f9-42a5-9820-91ab27411fcb"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.742453 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b8b28a87-c7f9-42a5-9820-91ab27411fcb" (UID: "b8b28a87-c7f9-42a5-9820-91ab27411fcb"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.743197 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b28a87-c7f9-42a5-9820-91ab27411fcb-scripts" (OuterVolumeSpecName: "scripts") pod "b8b28a87-c7f9-42a5-9820-91ab27411fcb" (UID: "b8b28a87-c7f9-42a5-9820-91ab27411fcb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.743738 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b28a87-c7f9-42a5-9820-91ab27411fcb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b8b28a87-c7f9-42a5-9820-91ab27411fcb" (UID: "b8b28a87-c7f9-42a5-9820-91ab27411fcb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.769325 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b28a87-c7f9-42a5-9820-91ab27411fcb-kube-api-access-lrmzw" (OuterVolumeSpecName: "kube-api-access-lrmzw") pod "b8b28a87-c7f9-42a5-9820-91ab27411fcb" (UID: "b8b28a87-c7f9-42a5-9820-91ab27411fcb"). InnerVolumeSpecName "kube-api-access-lrmzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.844080 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8b28a87-c7f9-42a5-9820-91ab27411fcb-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.844122 4661 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-run\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.844131 4661 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.844206 4661 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8b28a87-c7f9-42a5-9820-91ab27411fcb-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.844215 4661 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8b28a87-c7f9-42a5-9820-91ab27411fcb-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:07 crc kubenswrapper[4661]: I0120 18:24:07.844227 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrmzw\" (UniqueName: \"kubernetes.io/projected/b8b28a87-c7f9-42a5-9820-91ab27411fcb-kube-api-access-lrmzw\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.338213 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p7h4x-config-m9gbv" Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.338208 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7h4x-config-m9gbv" event={"ID":"b8b28a87-c7f9-42a5-9820-91ab27411fcb","Type":"ContainerDied","Data":"5855d425f174676d633f7d418ea0d8023a8de78bf14a01e8971096edbec3dc53"} Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.338409 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5855d425f174676d633f7d418ea0d8023a8de78bf14a01e8971096edbec3dc53" Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.339511 4661 generic.go:334] "Generic (PLEG): container finished" podID="f1a5c71b-b855-4712-b208-fc14557dd032" containerID="0fcc75b4d92303fc8e95432c1e013aae3f7cbe712d1e1f207d350c66e0f597c2" exitCode=0 Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.339711 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dkb68" event={"ID":"f1a5c71b-b855-4712-b208-fc14557dd032","Type":"ContainerDied","Data":"0fcc75b4d92303fc8e95432c1e013aae3f7cbe712d1e1f207d350c66e0f597c2"} Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.339764 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dkb68" event={"ID":"f1a5c71b-b855-4712-b208-fc14557dd032","Type":"ContainerStarted","Data":"4785d36446990690f761f10f0961948cf9ed7c206f3891e115d18be91b8176ff"} Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.741589 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-p7h4x-config-m9gbv"] Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.750195 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ovn-controller-p7h4x-config-m9gbv"] Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.857676 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-p7h4x-config-p6wtq"] Jan 20 18:24:08 crc kubenswrapper[4661]: E0120 18:24:08.858063 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b28a87-c7f9-42a5-9820-91ab27411fcb" containerName="ovn-config" Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.858080 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b28a87-c7f9-42a5-9820-91ab27411fcb" containerName="ovn-config" Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.858213 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b28a87-c7f9-42a5-9820-91ab27411fcb" containerName="ovn-config" Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.858745 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.862461 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.873318 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p7h4x-config-p6wtq"] Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.960335 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abdc289a-5047-465e-ac25-3934ef40796c-additional-scripts\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.960713 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-log-ovn\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.960732 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-run-ovn\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.960751 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z2sf\" (UniqueName: \"kubernetes.io/projected/abdc289a-5047-465e-ac25-3934ef40796c-kube-api-access-7z2sf\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.960780 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-run\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:08 crc kubenswrapper[4661]: I0120 18:24:08.960824 4661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abdc289a-5047-465e-ac25-3934ef40796c-scripts\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.061540 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-log-ovn\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.061583 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-run-ovn\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.061604 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z2sf\" (UniqueName: \"kubernetes.io/projected/abdc289a-5047-465e-ac25-3934ef40796c-kube-api-access-7z2sf\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.061628 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-run\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.061660 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abdc289a-5047-465e-ac25-3934ef40796c-scripts\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.064033 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abdc289a-5047-465e-ac25-3934ef40796c-additional-scripts\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.064853 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abdc289a-5047-465e-ac25-3934ef40796c-additional-scripts\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.065135 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-log-ovn\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.065189 4661 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-run-ovn\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.065482 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-run\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.067207 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abdc289a-5047-465e-ac25-3934ef40796c-scripts\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.097477 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z2sf\" (UniqueName: \"kubernetes.io/projected/abdc289a-5047-465e-ac25-3934ef40796c-kube-api-access-7z2sf\") pod \"ovn-controller-p7h4x-config-p6wtq\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.097727 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-p7h4x" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.175443 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.676975 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p7h4x-config-p6wtq"] Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.751371 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dkb68" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.775827 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c2rg\" (UniqueName: \"kubernetes.io/projected/f1a5c71b-b855-4712-b208-fc14557dd032-kube-api-access-7c2rg\") pod \"f1a5c71b-b855-4712-b208-fc14557dd032\" (UID: \"f1a5c71b-b855-4712-b208-fc14557dd032\") " Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.775960 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1a5c71b-b855-4712-b208-fc14557dd032-operator-scripts\") pod \"f1a5c71b-b855-4712-b208-fc14557dd032\" (UID: \"f1a5c71b-b855-4712-b208-fc14557dd032\") " Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.776949 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a5c71b-b855-4712-b208-fc14557dd032-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1a5c71b-b855-4712-b208-fc14557dd032" (UID: "f1a5c71b-b855-4712-b208-fc14557dd032"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.782053 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a5c71b-b855-4712-b208-fc14557dd032-kube-api-access-7c2rg" (OuterVolumeSpecName: "kube-api-access-7c2rg") pod "f1a5c71b-b855-4712-b208-fc14557dd032" (UID: "f1a5c71b-b855-4712-b208-fc14557dd032"). InnerVolumeSpecName "kube-api-access-7c2rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.877784 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1a5c71b-b855-4712-b208-fc14557dd032-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:09 crc kubenswrapper[4661]: I0120 18:24:09.877825 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c2rg\" (UniqueName: \"kubernetes.io/projected/f1a5c71b-b855-4712-b208-fc14557dd032-kube-api-access-7c2rg\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:10 crc kubenswrapper[4661]: I0120 18:24:10.155498 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8b28a87-c7f9-42a5-9820-91ab27411fcb" path="/var/lib/kubelet/pods/b8b28a87-c7f9-42a5-9820-91ab27411fcb/volumes" Jan 20 18:24:10 crc kubenswrapper[4661]: I0120 18:24:10.354733 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dkb68" event={"ID":"f1a5c71b-b855-4712-b208-fc14557dd032","Type":"ContainerDied","Data":"4785d36446990690f761f10f0961948cf9ed7c206f3891e115d18be91b8176ff"} Jan 20 18:24:10 crc kubenswrapper[4661]: I0120 18:24:10.354775 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4785d36446990690f761f10f0961948cf9ed7c206f3891e115d18be91b8176ff" Jan 20 18:24:10 crc kubenswrapper[4661]: I0120 18:24:10.354838 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dkb68" Jan 20 18:24:10 crc kubenswrapper[4661]: I0120 18:24:10.356536 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7h4x-config-p6wtq" event={"ID":"abdc289a-5047-465e-ac25-3934ef40796c","Type":"ContainerStarted","Data":"8fbf5cae51bd158c8d54d07c785bdfdf44fa3cb6c7bce59c1f5d24f8d6c18c23"} Jan 20 18:24:11 crc kubenswrapper[4661]: I0120 18:24:11.369228 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7h4x-config-p6wtq" event={"ID":"abdc289a-5047-465e-ac25-3934ef40796c","Type":"ContainerStarted","Data":"6670cf7646a37fc2b25aa36c24b0ac42f4b0e90eb50263c69d96c7a1d1fa734f"} Jan 20 18:24:12 crc kubenswrapper[4661]: I0120 18:24:12.379146 4661 generic.go:334] "Generic (PLEG): container finished" podID="abdc289a-5047-465e-ac25-3934ef40796c" containerID="6670cf7646a37fc2b25aa36c24b0ac42f4b0e90eb50263c69d96c7a1d1fa734f" exitCode=0 Jan 20 18:24:12 crc kubenswrapper[4661]: I0120 18:24:12.379218 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7h4x-config-p6wtq" event={"ID":"abdc289a-5047-465e-ac25-3934ef40796c","Type":"ContainerDied","Data":"6670cf7646a37fc2b25aa36c24b0ac42f4b0e90eb50263c69d96c7a1d1fa734f"} Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.119888 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.456720 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-b6hh6"] Jan 20 18:24:15 crc kubenswrapper[4661]: E0120 18:24:15.457251 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a5c71b-b855-4712-b208-fc14557dd032" containerName="mariadb-account-create-update" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.457324 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a5c71b-b855-4712-b208-fc14557dd032" containerName="mariadb-account-create-update" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.457516 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a5c71b-b855-4712-b208-fc14557dd032" containerName="mariadb-account-create-update" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.458070 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-b6hh6" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.477931 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-b6hh6"] Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.572995 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-p9spf"] Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.573247 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2-operator-scripts\") pod \"cinder-db-create-b6hh6\" (UID: \"f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2\") " pod="openstack/cinder-db-create-b6hh6" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.573285 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8s4n\" (UniqueName: \"kubernetes.io/projected/f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2-kube-api-access-w8s4n\") pod \"cinder-db-create-b6hh6\" (UID: \"f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2\") " pod="openstack/cinder-db-create-b6hh6" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.605611 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-p9spf" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.615946 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-p9spf"] Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.641636 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4694-account-create-update-d9j2f"] Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.643000 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4694-account-create-update-d9j2f" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.644720 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.648476 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4694-account-create-update-d9j2f"] Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.674489 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2-operator-scripts\") pod \"cinder-db-create-b6hh6\" (UID: \"f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2\") " pod="openstack/cinder-db-create-b6hh6" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.674542 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8s4n\" (UniqueName: \"kubernetes.io/projected/f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2-kube-api-access-w8s4n\") pod \"cinder-db-create-b6hh6\" (UID: \"f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2\") " pod="openstack/cinder-db-create-b6hh6" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.675751 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2-operator-scripts\") pod \"cinder-db-create-b6hh6\" (UID: \"f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2\") " pod="openstack/cinder-db-create-b6hh6" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.701894 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6b29-account-create-update-mp485"] Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.703120 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b29-account-create-update-mp485" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.705597 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.742733 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6b29-account-create-update-mp485"] Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.753222 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8s4n\" (UniqueName: \"kubernetes.io/projected/f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2-kube-api-access-w8s4n\") pod \"cinder-db-create-b6hh6\" (UID: \"f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2\") " pod="openstack/cinder-db-create-b6hh6" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.781772 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-b6hh6" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.782415 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxw7c\" (UniqueName: \"kubernetes.io/projected/ba6f71cd-96ff-473f-b875-b8f44a58dab4-kube-api-access-wxw7c\") pod \"barbican-4694-account-create-update-d9j2f\" (UID: \"ba6f71cd-96ff-473f-b875-b8f44a58dab4\") " pod="openstack/barbican-4694-account-create-update-d9j2f" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.782619 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6f71cd-96ff-473f-b875-b8f44a58dab4-operator-scripts\") pod \"barbican-4694-account-create-update-d9j2f\" (UID: \"ba6f71cd-96ff-473f-b875-b8f44a58dab4\") " pod="openstack/barbican-4694-account-create-update-d9j2f" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.782826 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d42815f3-34a4-40b4-bfca-cc453cb7d569-operator-scripts\") pod \"barbican-db-create-p9spf\" (UID: \"d42815f3-34a4-40b4-bfca-cc453cb7d569\") " pod="openstack/barbican-db-create-p9spf" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.782915 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jffwg\" (UniqueName: \"kubernetes.io/projected/d42815f3-34a4-40b4-bfca-cc453cb7d569-kube-api-access-jffwg\") pod \"barbican-db-create-p9spf\" (UID: \"d42815f3-34a4-40b4-bfca-cc453cb7d569\") " pod="openstack/barbican-db-create-p9spf" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.847595 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-g2bml"] Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.848954 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-g2bml" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.863139 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-g2bml"] Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.884499 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d42815f3-34a4-40b4-bfca-cc453cb7d569-operator-scripts\") pod \"barbican-db-create-p9spf\" (UID: \"d42815f3-34a4-40b4-bfca-cc453cb7d569\") " pod="openstack/barbican-db-create-p9spf" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.884951 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jffwg\" (UniqueName: \"kubernetes.io/projected/d42815f3-34a4-40b4-bfca-cc453cb7d569-kube-api-access-jffwg\") pod \"barbican-db-create-p9spf\" (UID: \"d42815f3-34a4-40b4-bfca-cc453cb7d569\") " pod="openstack/barbican-db-create-p9spf" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.885043 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-555np\" (UniqueName: \"kubernetes.io/projected/8ca68c3b-f314-4a85-80c3-e3ff0a17b449-kube-api-access-555np\") pod \"cinder-6b29-account-create-update-mp485\" (UID: \"8ca68c3b-f314-4a85-80c3-e3ff0a17b449\") " pod="openstack/cinder-6b29-account-create-update-mp485" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.885138 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxw7c\" (UniqueName: \"kubernetes.io/projected/ba6f71cd-96ff-473f-b875-b8f44a58dab4-kube-api-access-wxw7c\") pod \"barbican-4694-account-create-update-d9j2f\" (UID: \"ba6f71cd-96ff-473f-b875-b8f44a58dab4\") " pod="openstack/barbican-4694-account-create-update-d9j2f" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.885235 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6f71cd-96ff-473f-b875-b8f44a58dab4-operator-scripts\") pod \"barbican-4694-account-create-update-d9j2f\" (UID: \"ba6f71cd-96ff-473f-b875-b8f44a58dab4\") " pod="openstack/barbican-4694-account-create-update-d9j2f" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.885315 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ca68c3b-f314-4a85-80c3-e3ff0a17b449-operator-scripts\") pod \"cinder-6b29-account-create-update-mp485\" (UID: \"8ca68c3b-f314-4a85-80c3-e3ff0a17b449\") " pod="openstack/cinder-6b29-account-create-update-mp485" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.885992 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d42815f3-34a4-40b4-bfca-cc453cb7d569-operator-scripts\") pod \"barbican-db-create-p9spf\" (UID: \"d42815f3-34a4-40b4-bfca-cc453cb7d569\") " pod="openstack/barbican-db-create-p9spf" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.886856 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6f71cd-96ff-473f-b875-b8f44a58dab4-operator-scripts\") pod \"barbican-4694-account-create-update-d9j2f\" (UID: \"ba6f71cd-96ff-473f-b875-b8f44a58dab4\") " pod="openstack/barbican-4694-account-create-update-d9j2f" Jan 20 18:24:15 crc 
kubenswrapper[4661]: I0120 18:24:15.910660 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxw7c\" (UniqueName: \"kubernetes.io/projected/ba6f71cd-96ff-473f-b875-b8f44a58dab4-kube-api-access-wxw7c\") pod \"barbican-4694-account-create-update-d9j2f\" (UID: \"ba6f71cd-96ff-473f-b875-b8f44a58dab4\") " pod="openstack/barbican-4694-account-create-update-d9j2f" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.919438 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jffwg\" (UniqueName: \"kubernetes.io/projected/d42815f3-34a4-40b4-bfca-cc453cb7d569-kube-api-access-jffwg\") pod \"barbican-db-create-p9spf\" (UID: \"d42815f3-34a4-40b4-bfca-cc453cb7d569\") " pod="openstack/barbican-db-create-p9spf" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.934419 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-p9spf" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.966079 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4694-account-create-update-d9j2f" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.976577 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wq7m7"] Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.977774 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wq7m7" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.982847 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.983256 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.983517 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.984174 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bvbmp" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.989782 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ca68c3b-f314-4a85-80c3-e3ff0a17b449-operator-scripts\") pod \"cinder-6b29-account-create-update-mp485\" (UID: \"8ca68c3b-f314-4a85-80c3-e3ff0a17b449\") " pod="openstack/cinder-6b29-account-create-update-mp485" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.989889 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c78vr\" (UniqueName: \"kubernetes.io/projected/becce439-c271-41e5-9d39-0bdd4b284772-kube-api-access-c78vr\") pod \"neutron-db-create-g2bml\" (UID: \"becce439-c271-41e5-9d39-0bdd4b284772\") " pod="openstack/neutron-db-create-g2bml" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.989915 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-555np\" (UniqueName: \"kubernetes.io/projected/8ca68c3b-f314-4a85-80c3-e3ff0a17b449-kube-api-access-555np\") pod \"cinder-6b29-account-create-update-mp485\" (UID: \"8ca68c3b-f314-4a85-80c3-e3ff0a17b449\") " pod="openstack/cinder-6b29-account-create-update-mp485" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.989957 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/becce439-c271-41e5-9d39-0bdd4b284772-operator-scripts\") pod \"neutron-db-create-g2bml\" (UID: \"becce439-c271-41e5-9d39-0bdd4b284772\") " pod="openstack/neutron-db-create-g2bml" Jan 20 18:24:15 crc kubenswrapper[4661]: I0120 18:24:15.990570 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ca68c3b-f314-4a85-80c3-e3ff0a17b449-operator-scripts\") pod \"cinder-6b29-account-create-update-mp485\" (UID: \"8ca68c3b-f314-4a85-80c3-e3ff0a17b449\") " pod="openstack/cinder-6b29-account-create-update-mp485" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.001402 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wq7m7"] Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.010152 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c726-account-create-update-9nmqv"] Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.011217 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c726-account-create-update-9nmqv" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.016208 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-555np\" (UniqueName: \"kubernetes.io/projected/8ca68c3b-f314-4a85-80c3-e3ff0a17b449-kube-api-access-555np\") pod \"cinder-6b29-account-create-update-mp485\" (UID: \"8ca68c3b-f314-4a85-80c3-e3ff0a17b449\") " pod="openstack/cinder-6b29-account-create-update-mp485" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.026121 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b29-account-create-update-mp485" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.043556 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.092349 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/becce439-c271-41e5-9d39-0bdd4b284772-operator-scripts\") pod \"neutron-db-create-g2bml\" (UID: \"becce439-c271-41e5-9d39-0bdd4b284772\") " pod="openstack/neutron-db-create-g2bml" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.092491 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx5xx\" (UniqueName: \"kubernetes.io/projected/289c1012-9041-4cb4-baa7-888d31048e4c-kube-api-access-dx5xx\") pod \"keystone-db-sync-wq7m7\" (UID: \"289c1012-9041-4cb4-baa7-888d31048e4c\") " pod="openstack/keystone-db-sync-wq7m7" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.092541 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c78vr\" (UniqueName: \"kubernetes.io/projected/becce439-c271-41e5-9d39-0bdd4b284772-kube-api-access-c78vr\") pod \"neutron-db-create-g2bml\" (UID: \"becce439-c271-41e5-9d39-0bdd4b284772\") " pod="openstack/neutron-db-create-g2bml" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.092560 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/289c1012-9041-4cb4-baa7-888d31048e4c-config-data\") pod \"keystone-db-sync-wq7m7\" (UID: \"289c1012-9041-4cb4-baa7-888d31048e4c\") " pod="openstack/keystone-db-sync-wq7m7" Jan 20 18:24:16 
crc kubenswrapper[4661]: I0120 18:24:16.092606 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289c1012-9041-4cb4-baa7-888d31048e4c-combined-ca-bundle\") pod \"keystone-db-sync-wq7m7\" (UID: \"289c1012-9041-4cb4-baa7-888d31048e4c\") " pod="openstack/keystone-db-sync-wq7m7" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.093403 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/becce439-c271-41e5-9d39-0bdd4b284772-operator-scripts\") pod \"neutron-db-create-g2bml\" (UID: \"becce439-c271-41e5-9d39-0bdd4b284772\") " pod="openstack/neutron-db-create-g2bml" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.097379 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c726-account-create-update-9nmqv"] Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.188892 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c78vr\" (UniqueName: \"kubernetes.io/projected/becce439-c271-41e5-9d39-0bdd4b284772-kube-api-access-c78vr\") pod \"neutron-db-create-g2bml\" (UID: \"becce439-c271-41e5-9d39-0bdd4b284772\") " pod="openstack/neutron-db-create-g2bml" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.194508 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/289c1012-9041-4cb4-baa7-888d31048e4c-config-data\") pod \"keystone-db-sync-wq7m7\" (UID: \"289c1012-9041-4cb4-baa7-888d31048e4c\") " pod="openstack/keystone-db-sync-wq7m7" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.194564 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289c1012-9041-4cb4-baa7-888d31048e4c-combined-ca-bundle\") pod \"keystone-db-sync-wq7m7\" (UID: \"289c1012-9041-4cb4-baa7-888d31048e4c\") " pod="openstack/keystone-db-sync-wq7m7" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.194611 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ecbb4c9-acd4-45bd-a270-8798d5aa5926-operator-scripts\") pod \"neutron-c726-account-create-update-9nmqv\" (UID: \"8ecbb4c9-acd4-45bd-a270-8798d5aa5926\") " pod="openstack/neutron-c726-account-create-update-9nmqv" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.194632 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48gzx\" (UniqueName: \"kubernetes.io/projected/8ecbb4c9-acd4-45bd-a270-8798d5aa5926-kube-api-access-48gzx\") pod \"neutron-c726-account-create-update-9nmqv\" (UID: \"8ecbb4c9-acd4-45bd-a270-8798d5aa5926\") " pod="openstack/neutron-c726-account-create-update-9nmqv" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.194747 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx5xx\" (UniqueName: \"kubernetes.io/projected/289c1012-9041-4cb4-baa7-888d31048e4c-kube-api-access-dx5xx\") pod \"keystone-db-sync-wq7m7\" (UID: \"289c1012-9041-4cb4-baa7-888d31048e4c\") " pod="openstack/keystone-db-sync-wq7m7" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.197575 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/289c1012-9041-4cb4-baa7-888d31048e4c-combined-ca-bundle\") pod \"keystone-db-sync-wq7m7\" (UID: \"289c1012-9041-4cb4-baa7-888d31048e4c\") " pod="openstack/keystone-db-sync-wq7m7" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.209333 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/289c1012-9041-4cb4-baa7-888d31048e4c-config-data\") pod \"keystone-db-sync-wq7m7\" (UID: \"289c1012-9041-4cb4-baa7-888d31048e4c\") " pod="openstack/keystone-db-sync-wq7m7" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.216450 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx5xx\" (UniqueName: \"kubernetes.io/projected/289c1012-9041-4cb4-baa7-888d31048e4c-kube-api-access-dx5xx\") pod \"keystone-db-sync-wq7m7\" (UID: \"289c1012-9041-4cb4-baa7-888d31048e4c\") " pod="openstack/keystone-db-sync-wq7m7" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.296038 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ecbb4c9-acd4-45bd-a270-8798d5aa5926-operator-scripts\") pod \"neutron-c726-account-create-update-9nmqv\" (UID: \"8ecbb4c9-acd4-45bd-a270-8798d5aa5926\") " pod="openstack/neutron-c726-account-create-update-9nmqv" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.297011 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ecbb4c9-acd4-45bd-a270-8798d5aa5926-operator-scripts\") pod \"neutron-c726-account-create-update-9nmqv\" (UID: \"8ecbb4c9-acd4-45bd-a270-8798d5aa5926\") " pod="openstack/neutron-c726-account-create-update-9nmqv" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.297059 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48gzx\" (UniqueName: \"kubernetes.io/projected/8ecbb4c9-acd4-45bd-a270-8798d5aa5926-kube-api-access-48gzx\") pod \"neutron-c726-account-create-update-9nmqv\" (UID: \"8ecbb4c9-acd4-45bd-a270-8798d5aa5926\") " pod="openstack/neutron-c726-account-create-update-9nmqv" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.305869 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wq7m7" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.314356 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48gzx\" (UniqueName: \"kubernetes.io/projected/8ecbb4c9-acd4-45bd-a270-8798d5aa5926-kube-api-access-48gzx\") pod \"neutron-c726-account-create-update-9nmqv\" (UID: \"8ecbb4c9-acd4-45bd-a270-8798d5aa5926\") " pod="openstack/neutron-c726-account-create-update-9nmqv" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.363072 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c726-account-create-update-9nmqv" Jan 20 18:24:16 crc kubenswrapper[4661]: I0120 18:24:16.471405 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-g2bml" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.200938 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.312595 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-run\") pod \"abdc289a-5047-465e-ac25-3934ef40796c\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.312823 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-run" (OuterVolumeSpecName: "var-run") pod "abdc289a-5047-465e-ac25-3934ef40796c" (UID: "abdc289a-5047-465e-ac25-3934ef40796c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.312888 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "abdc289a-5047-465e-ac25-3934ef40796c" (UID: "abdc289a-5047-465e-ac25-3934ef40796c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.312857 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-log-ovn\") pod \"abdc289a-5047-465e-ac25-3934ef40796c\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.313000 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z2sf\" (UniqueName: \"kubernetes.io/projected/abdc289a-5047-465e-ac25-3934ef40796c-kube-api-access-7z2sf\") pod \"abdc289a-5047-465e-ac25-3934ef40796c\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.313039 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abdc289a-5047-465e-ac25-3934ef40796c-additional-scripts\") pod \"abdc289a-5047-465e-ac25-3934ef40796c\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.313056 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abdc289a-5047-465e-ac25-3934ef40796c-scripts\") pod \"abdc289a-5047-465e-ac25-3934ef40796c\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.313121 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-run-ovn\") pod \"abdc289a-5047-465e-ac25-3934ef40796c\" (UID: \"abdc289a-5047-465e-ac25-3934ef40796c\") " Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.313784 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "abdc289a-5047-465e-ac25-3934ef40796c" (UID: "abdc289a-5047-465e-ac25-3934ef40796c"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.313790 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abdc289a-5047-465e-ac25-3934ef40796c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "abdc289a-5047-465e-ac25-3934ef40796c" (UID: "abdc289a-5047-465e-ac25-3934ef40796c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.314785 4661 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.314804 4661 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-run\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.314813 4661 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abdc289a-5047-465e-ac25-3934ef40796c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.314820 4661 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abdc289a-5047-465e-ac25-3934ef40796c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.317222 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abdc289a-5047-465e-ac25-3934ef40796c-scripts" (OuterVolumeSpecName: "scripts") pod "abdc289a-5047-465e-ac25-3934ef40796c" (UID: "abdc289a-5047-465e-ac25-3934ef40796c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.326699 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abdc289a-5047-465e-ac25-3934ef40796c-kube-api-access-7z2sf" (OuterVolumeSpecName: "kube-api-access-7z2sf") pod "abdc289a-5047-465e-ac25-3934ef40796c" (UID: "abdc289a-5047-465e-ac25-3934ef40796c"). InnerVolumeSpecName "kube-api-access-7z2sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.416046 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z2sf\" (UniqueName: \"kubernetes.io/projected/abdc289a-5047-465e-ac25-3934ef40796c-kube-api-access-7z2sf\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.416323 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abdc289a-5047-465e-ac25-3934ef40796c-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.456421 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p7h4x-config-p6wtq" event={"ID":"abdc289a-5047-465e-ac25-3934ef40796c","Type":"ContainerDied","Data":"8fbf5cae51bd158c8d54d07c785bdfdf44fa3cb6c7bce59c1f5d24f8d6c18c23"} Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.456453 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fbf5cae51bd158c8d54d07c785bdfdf44fa3cb6c7bce59c1f5d24f8d6c18c23" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.456508 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p7h4x-config-p6wtq" Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.744801 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wq7m7"] Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.853509 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-b6hh6"] Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.876555 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c726-account-create-update-9nmqv"] Jan 20 18:24:20 crc kubenswrapper[4661]: W0120 18:24:20.896574 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ecbb4c9_acd4_45bd_a270_8798d5aa5926.slice/crio-584cc818a5ee52866889726541e044defe397f9ecf6c38bd21c6d0430e23b5df WatchSource:0}: Error finding container 584cc818a5ee52866889726541e044defe397f9ecf6c38bd21c6d0430e23b5df: Status 404 returned error can't find the container with id 584cc818a5ee52866889726541e044defe397f9ecf6c38bd21c6d0430e23b5df Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.896959 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-p9spf"] Jan 20 18:24:20 crc kubenswrapper[4661]: I0120 18:24:20.903203 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-g2bml"] Jan 20 18:24:20 crc kubenswrapper[4661]: W0120 18:24:20.916064 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd42815f3_34a4_40b4_bfca_cc453cb7d569.slice/crio-e034c7b6343f6d78cee1cc91703d6ac1d80494fb361bc0c5677ec65b8aa81863 WatchSource:0}: Error finding container e034c7b6343f6d78cee1cc91703d6ac1d80494fb361bc0c5677ec65b8aa81863: Status 404 returned error can't find the container with id e034c7b6343f6d78cee1cc91703d6ac1d80494fb361bc0c5677ec65b8aa81863 Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.023343 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6b29-account-create-update-mp485"] Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.040491 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-4694-account-create-update-d9j2f"] Jan 20 18:24:21 crc kubenswrapper[4661]: W0120 18:24:21.068988 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba6f71cd_96ff_473f_b875_b8f44a58dab4.slice/crio-9cbe0d0d18ad810e7ce7951e1f0aec7d6d0e5c7cdc8faa4ada7ce794d72c1130 WatchSource:0}: Error finding container 9cbe0d0d18ad810e7ce7951e1f0aec7d6d0e5c7cdc8faa4ada7ce794d72c1130: Status 404 returned error can't find the container with id 9cbe0d0d18ad810e7ce7951e1f0aec7d6d0e5c7cdc8faa4ada7ce794d72c1130 Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.282822 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-p7h4x-config-p6wtq"] Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.293066 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-p7h4x-config-p6wtq"] Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.476231 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g2bml" event={"ID":"becce439-c271-41e5-9d39-0bdd4b284772","Type":"ContainerStarted","Data":"4e961f31c710c32d632178da3c1f6553e83a76abff69ce2153f682fd7c352402"} Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.476288 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g2bml" event={"ID":"becce439-c271-41e5-9d39-0bdd4b284772","Type":"ContainerStarted","Data":"39e8732b877e0dfa1336a8f78fb382d1ac343b934ab11788ee8a0d4a6f8379e4"} Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.484153 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4694-account-create-update-d9j2f" event={"ID":"ba6f71cd-96ff-473f-b875-b8f44a58dab4","Type":"ContainerStarted","Data":"9cbe0d0d18ad810e7ce7951e1f0aec7d6d0e5c7cdc8faa4ada7ce794d72c1130"} Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.490032 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wq7m7" event={"ID":"289c1012-9041-4cb4-baa7-888d31048e4c","Type":"ContainerStarted","Data":"980128280ba8a699e9e351466c542c5555edcc240c07dea0542e0ac2bb69d3b2"} Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.494786 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c726-account-create-update-9nmqv" event={"ID":"8ecbb4c9-acd4-45bd-a270-8798d5aa5926","Type":"ContainerStarted","Data":"9c44cb67da46db745fac0fed83b94683bb0aff4510142ef54d241b5dfbcff29a"} Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.494840 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c726-account-create-update-9nmqv" event={"ID":"8ecbb4c9-acd4-45bd-a270-8798d5aa5926","Type":"ContainerStarted","Data":"584cc818a5ee52866889726541e044defe397f9ecf6c38bd21c6d0430e23b5df"} Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.496532 4661 generic.go:334] "Generic (PLEG): container finished" podID="f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2" containerID="b357a0ec01664d19d9701e2fe9631e4f8db945a5ea59a45bc4c522d3be569270" exitCode=0 Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.496614 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-b6hh6" event={"ID":"f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2","Type":"ContainerDied","Data":"b357a0ec01664d19d9701e2fe9631e4f8db945a5ea59a45bc4c522d3be569270"} Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.496642 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-b6hh6" event={"ID":"f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2","Type":"ContainerStarted","Data":"9638f402c262fe5f3a83e30961cc57ab42294ccf1cc4e5a9ee8233c943ead79b"} Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.498122 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tck9n" event={"ID":"64020290-0e73-480d-b523-d7cb664eacfd","Type":"ContainerStarted","Data":"77925133481db4cab9d22b8313e107aa9311cb661804b6c28eea5f7f4979af4b"} Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.500554 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b29-account-create-update-mp485" event={"ID":"8ca68c3b-f314-4a85-80c3-e3ff0a17b449","Type":"ContainerStarted","Data":"91c1c0dc5ee3b06a8c25355770483a60714c2caa96d9416802ca6b14f02eda5f"} Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.500598 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b29-account-create-update-mp485" event={"ID":"8ca68c3b-f314-4a85-80c3-e3ff0a17b449","Type":"ContainerStarted","Data":"ac787d4415d3521dc73d5950cd507f58d92d28db548c3aa1d209db7136e7cd5a"} Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.512205 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-p9spf" event={"ID":"d42815f3-34a4-40b4-bfca-cc453cb7d569","Type":"ContainerStarted","Data":"4e14a4f29c28dcd080fd708fc1a1397641711d809052cb863dca7f5e9ba4ca54"} Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.512412 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-p9spf" event={"ID":"d42815f3-34a4-40b4-bfca-cc453cb7d569","Type":"ContainerStarted","Data":"e034c7b6343f6d78cee1cc91703d6ac1d80494fb361bc0c5677ec65b8aa81863"} Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.527589 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-tck9n" podStartSLOduration=2.222204219 podStartE2EDuration="17.527569613s" podCreationTimestamp="2026-01-20 18:24:04 +0000 UTC" firstStartedPulling="2026-01-20 18:24:05.067496429 +0000 UTC m=+1101.398286091" lastFinishedPulling="2026-01-20 18:24:20.372861823 +0000 UTC m=+1116.703651485" observedRunningTime="2026-01-20 18:24:21.522397936 +0000 UTC m=+1117.853187608" watchObservedRunningTime="2026-01-20 18:24:21.527569613 +0000 UTC m=+1117.858359285" Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.553689 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-g2bml" podStartSLOduration=6.553654806 podStartE2EDuration="6.553654806s" podCreationTimestamp="2026-01-20 18:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:24:21.506304518 +0000 UTC m=+1117.837094180" watchObservedRunningTime="2026-01-20 18:24:21.553654806 +0000 UTC m=+1117.884444468" Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.557064 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6b29-account-create-update-mp485" podStartSLOduration=6.557047336 podStartE2EDuration="6.557047336s" podCreationTimestamp="2026-01-20 18:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:24:21.541526124 +0000 UTC m=+1117.872315786" watchObservedRunningTime="2026-01-20 18:24:21.557047336 +0000 UTC m=+1117.887836998" Jan 20 18:24:21 
crc kubenswrapper[4661]: I0120 18:24:21.572448 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c726-account-create-update-9nmqv" podStartSLOduration=6.572429535 podStartE2EDuration="6.572429535s" podCreationTimestamp="2026-01-20 18:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:24:21.563519928 +0000 UTC m=+1117.894309590" watchObservedRunningTime="2026-01-20 18:24:21.572429535 +0000 UTC m=+1117.903219197" Jan 20 18:24:21 crc kubenswrapper[4661]: I0120 18:24:21.654314 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-p9spf" podStartSLOduration=6.65429974 podStartE2EDuration="6.65429974s" podCreationTimestamp="2026-01-20 18:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:24:21.651978359 +0000 UTC m=+1117.982768021" watchObservedRunningTime="2026-01-20 18:24:21.65429974 +0000 UTC m=+1117.985089402" Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.154164 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abdc289a-5047-465e-ac25-3934ef40796c" path="/var/lib/kubelet/pods/abdc289a-5047-465e-ac25-3934ef40796c/volumes" Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.519355 4661 generic.go:334] "Generic (PLEG): container finished" podID="becce439-c271-41e5-9d39-0bdd4b284772" containerID="4e961f31c710c32d632178da3c1f6553e83a76abff69ce2153f682fd7c352402" exitCode=0 Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.519414 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g2bml" event={"ID":"becce439-c271-41e5-9d39-0bdd4b284772","Type":"ContainerDied","Data":"4e961f31c710c32d632178da3c1f6553e83a76abff69ce2153f682fd7c352402"} Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.520658 4661 generic.go:334] "Generic (PLEG): container finished" podID="d42815f3-34a4-40b4-bfca-cc453cb7d569" containerID="4e14a4f29c28dcd080fd708fc1a1397641711d809052cb863dca7f5e9ba4ca54" exitCode=0 Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.520714 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-p9spf" event={"ID":"d42815f3-34a4-40b4-bfca-cc453cb7d569","Type":"ContainerDied","Data":"4e14a4f29c28dcd080fd708fc1a1397641711d809052cb863dca7f5e9ba4ca54"} Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.521802 4661 generic.go:334] "Generic (PLEG): container finished" podID="ba6f71cd-96ff-473f-b875-b8f44a58dab4" containerID="3004c39b2a690d14c58709df3cdafe712bdb579576ed3687154e82d37a2c8591" exitCode=0 Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.521841 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4694-account-create-update-d9j2f" event={"ID":"ba6f71cd-96ff-473f-b875-b8f44a58dab4","Type":"ContainerDied","Data":"3004c39b2a690d14c58709df3cdafe712bdb579576ed3687154e82d37a2c8591"} Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.525324 4661 generic.go:334] "Generic (PLEG): container finished" podID="8ecbb4c9-acd4-45bd-a270-8798d5aa5926" containerID="9c44cb67da46db745fac0fed83b94683bb0aff4510142ef54d241b5dfbcff29a" exitCode=0 Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.525366 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c726-account-create-update-9nmqv" 
event={"ID":"8ecbb4c9-acd4-45bd-a270-8798d5aa5926","Type":"ContainerDied","Data":"9c44cb67da46db745fac0fed83b94683bb0aff4510142ef54d241b5dfbcff29a"} Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.527390 4661 generic.go:334] "Generic (PLEG): container finished" podID="8ca68c3b-f314-4a85-80c3-e3ff0a17b449" containerID="91c1c0dc5ee3b06a8c25355770483a60714c2caa96d9416802ca6b14f02eda5f" exitCode=0 Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.527560 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b29-account-create-update-mp485" event={"ID":"8ca68c3b-f314-4a85-80c3-e3ff0a17b449","Type":"ContainerDied","Data":"91c1c0dc5ee3b06a8c25355770483a60714c2caa96d9416802ca6b14f02eda5f"} Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.911359 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-b6hh6" Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.968545 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2-operator-scripts\") pod \"f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2\" (UID: \"f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2\") " Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.968692 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8s4n\" (UniqueName: \"kubernetes.io/projected/f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2-kube-api-access-w8s4n\") pod \"f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2\" (UID: \"f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2\") " Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.971824 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2" (UID: "f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:22 crc kubenswrapper[4661]: I0120 18:24:22.976910 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2-kube-api-access-w8s4n" (OuterVolumeSpecName: "kube-api-access-w8s4n") pod "f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2" (UID: "f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2"). InnerVolumeSpecName "kube-api-access-w8s4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:23 crc kubenswrapper[4661]: I0120 18:24:23.069620 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:23 crc kubenswrapper[4661]: I0120 18:24:23.069647 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8s4n\" (UniqueName: \"kubernetes.io/projected/f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2-kube-api-access-w8s4n\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:23 crc kubenswrapper[4661]: I0120 18:24:23.536903 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-b6hh6" event={"ID":"f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2","Type":"ContainerDied","Data":"9638f402c262fe5f3a83e30961cc57ab42294ccf1cc4e5a9ee8233c943ead79b"} Jan 20 18:24:23 crc kubenswrapper[4661]: I0120 18:24:23.536959 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9638f402c262fe5f3a83e30961cc57ab42294ccf1cc4e5a9ee8233c943ead79b" Jan 20 18:24:23 crc kubenswrapper[4661]: I0120 18:24:23.536913 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-b6hh6" Jan 20 18:24:25 crc kubenswrapper[4661]: I0120 18:24:25.474897 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:24:29 crc kubenswrapper[4661]: I0120 18:24:29.323492 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:24:29 crc kubenswrapper[4661]: I0120 18:24:29.325131 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:24:33 crc kubenswrapper[4661]: E0120 18:24:33.322902 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Jan 20 18:24:33 crc kubenswrapper[4661]: E0120 18:24:33.323907 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dx5xx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-wq7m7_openstack(289c1012-9041-4cb4-baa7-888d31048e4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:24:33 crc kubenswrapper[4661]: E0120 18:24:33.325131 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-wq7m7" podUID="289c1012-9041-4cb4-baa7-888d31048e4c" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.399251 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b29-account-create-update-mp485" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.422082 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-p9spf" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.438210 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-g2bml" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.446597 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c726-account-create-update-9nmqv" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.458519 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4694-account-create-update-d9j2f" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.559810 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48gzx\" (UniqueName: \"kubernetes.io/projected/8ecbb4c9-acd4-45bd-a270-8798d5aa5926-kube-api-access-48gzx\") pod \"8ecbb4c9-acd4-45bd-a270-8798d5aa5926\" (UID: \"8ecbb4c9-acd4-45bd-a270-8798d5aa5926\") " Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.559895 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ca68c3b-f314-4a85-80c3-e3ff0a17b449-operator-scripts\") pod \"8ca68c3b-f314-4a85-80c3-e3ff0a17b449\" (UID: \"8ca68c3b-f314-4a85-80c3-e3ff0a17b449\") " Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.559972 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/becce439-c271-41e5-9d39-0bdd4b284772-operator-scripts\") pod \"becce439-c271-41e5-9d39-0bdd4b284772\" (UID: \"becce439-c271-41e5-9d39-0bdd4b284772\") " Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.560050 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d42815f3-34a4-40b4-bfca-cc453cb7d569-operator-scripts\") pod \"d42815f3-34a4-40b4-bfca-cc453cb7d569\" (UID: \"d42815f3-34a4-40b4-bfca-cc453cb7d569\") " Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.560127 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxw7c\" (UniqueName: \"kubernetes.io/projected/ba6f71cd-96ff-473f-b875-b8f44a58dab4-kube-api-access-wxw7c\") pod \"ba6f71cd-96ff-473f-b875-b8f44a58dab4\" (UID: \"ba6f71cd-96ff-473f-b875-b8f44a58dab4\") " Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.560276 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jffwg\" (UniqueName: \"kubernetes.io/projected/d42815f3-34a4-40b4-bfca-cc453cb7d569-kube-api-access-jffwg\") pod \"d42815f3-34a4-40b4-bfca-cc453cb7d569\" (UID: \"d42815f3-34a4-40b4-bfca-cc453cb7d569\") " Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.560425 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c78vr\" (UniqueName: \"kubernetes.io/projected/becce439-c271-41e5-9d39-0bdd4b284772-kube-api-access-c78vr\") pod \"becce439-c271-41e5-9d39-0bdd4b284772\" (UID: \"becce439-c271-41e5-9d39-0bdd4b284772\") " Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.560550 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-555np\" (UniqueName: \"kubernetes.io/projected/8ca68c3b-f314-4a85-80c3-e3ff0a17b449-kube-api-access-555np\") pod \"8ca68c3b-f314-4a85-80c3-e3ff0a17b449\" (UID: \"8ca68c3b-f314-4a85-80c3-e3ff0a17b449\") " Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.560640 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6f71cd-96ff-473f-b875-b8f44a58dab4-operator-scripts\") pod \"ba6f71cd-96ff-473f-b875-b8f44a58dab4\" (UID: \"ba6f71cd-96ff-473f-b875-b8f44a58dab4\") " Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.560698 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8ecbb4c9-acd4-45bd-a270-8798d5aa5926-operator-scripts\") pod \"8ecbb4c9-acd4-45bd-a270-8798d5aa5926\" (UID: \"8ecbb4c9-acd4-45bd-a270-8798d5aa5926\") " Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.560863 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/becce439-c271-41e5-9d39-0bdd4b284772-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "becce439-c271-41e5-9d39-0bdd4b284772" (UID: "becce439-c271-41e5-9d39-0bdd4b284772"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.560942 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ca68c3b-f314-4a85-80c3-e3ff0a17b449-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ca68c3b-f314-4a85-80c3-e3ff0a17b449" (UID: "8ca68c3b-f314-4a85-80c3-e3ff0a17b449"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.561307 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ca68c3b-f314-4a85-80c3-e3ff0a17b449-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.561336 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/becce439-c271-41e5-9d39-0bdd4b284772-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.561518 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d42815f3-34a4-40b4-bfca-cc453cb7d569-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d42815f3-34a4-40b4-bfca-cc453cb7d569" (UID: "d42815f3-34a4-40b4-bfca-cc453cb7d569"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.562001 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba6f71cd-96ff-473f-b875-b8f44a58dab4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba6f71cd-96ff-473f-b875-b8f44a58dab4" (UID: "ba6f71cd-96ff-473f-b875-b8f44a58dab4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.562596 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ecbb4c9-acd4-45bd-a270-8798d5aa5926-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ecbb4c9-acd4-45bd-a270-8798d5aa5926" (UID: "8ecbb4c9-acd4-45bd-a270-8798d5aa5926"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.567923 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becce439-c271-41e5-9d39-0bdd4b284772-kube-api-access-c78vr" (OuterVolumeSpecName: "kube-api-access-c78vr") pod "becce439-c271-41e5-9d39-0bdd4b284772" (UID: "becce439-c271-41e5-9d39-0bdd4b284772"). InnerVolumeSpecName "kube-api-access-c78vr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.568308 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba6f71cd-96ff-473f-b875-b8f44a58dab4-kube-api-access-wxw7c" (OuterVolumeSpecName: "kube-api-access-wxw7c") pod "ba6f71cd-96ff-473f-b875-b8f44a58dab4" (UID: "ba6f71cd-96ff-473f-b875-b8f44a58dab4"). InnerVolumeSpecName "kube-api-access-wxw7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.571771 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42815f3-34a4-40b4-bfca-cc453cb7d569-kube-api-access-jffwg" (OuterVolumeSpecName: "kube-api-access-jffwg") pod "d42815f3-34a4-40b4-bfca-cc453cb7d569" (UID: "d42815f3-34a4-40b4-bfca-cc453cb7d569"). InnerVolumeSpecName "kube-api-access-jffwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.571901 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ecbb4c9-acd4-45bd-a270-8798d5aa5926-kube-api-access-48gzx" (OuterVolumeSpecName: "kube-api-access-48gzx") pod "8ecbb4c9-acd4-45bd-a270-8798d5aa5926" (UID: "8ecbb4c9-acd4-45bd-a270-8798d5aa5926"). InnerVolumeSpecName "kube-api-access-48gzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.572644 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca68c3b-f314-4a85-80c3-e3ff0a17b449-kube-api-access-555np" (OuterVolumeSpecName: "kube-api-access-555np") pod "8ca68c3b-f314-4a85-80c3-e3ff0a17b449" (UID: "8ca68c3b-f314-4a85-80c3-e3ff0a17b449"). InnerVolumeSpecName "kube-api-access-555np". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.628006 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-p9spf" event={"ID":"d42815f3-34a4-40b4-bfca-cc453cb7d569","Type":"ContainerDied","Data":"e034c7b6343f6d78cee1cc91703d6ac1d80494fb361bc0c5677ec65b8aa81863"} Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.628046 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e034c7b6343f6d78cee1cc91703d6ac1d80494fb361bc0c5677ec65b8aa81863" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.628110 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-p9spf" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.629760 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g2bml" event={"ID":"becce439-c271-41e5-9d39-0bdd4b284772","Type":"ContainerDied","Data":"39e8732b877e0dfa1336a8f78fb382d1ac343b934ab11788ee8a0d4a6f8379e4"} Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.629785 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e8732b877e0dfa1336a8f78fb382d1ac343b934ab11788ee8a0d4a6f8379e4" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.630479 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-g2bml" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.631776 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4694-account-create-update-d9j2f" event={"ID":"ba6f71cd-96ff-473f-b875-b8f44a58dab4","Type":"ContainerDied","Data":"9cbe0d0d18ad810e7ce7951e1f0aec7d6d0e5c7cdc8faa4ada7ce794d72c1130"} Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.631805 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cbe0d0d18ad810e7ce7951e1f0aec7d6d0e5c7cdc8faa4ada7ce794d72c1130" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.631864 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4694-account-create-update-d9j2f" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.633502 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c726-account-create-update-9nmqv" event={"ID":"8ecbb4c9-acd4-45bd-a270-8798d5aa5926","Type":"ContainerDied","Data":"584cc818a5ee52866889726541e044defe397f9ecf6c38bd21c6d0430e23b5df"} Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.633529 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="584cc818a5ee52866889726541e044defe397f9ecf6c38bd21c6d0430e23b5df" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.633596 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c726-account-create-update-9nmqv" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.638590 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b29-account-create-update-mp485" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.643244 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b29-account-create-update-mp485" event={"ID":"8ca68c3b-f314-4a85-80c3-e3ff0a17b449","Type":"ContainerDied","Data":"ac787d4415d3521dc73d5950cd507f58d92d28db548c3aa1d209db7136e7cd5a"} Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.643348 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac787d4415d3521dc73d5950cd507f58d92d28db548c3aa1d209db7136e7cd5a" Jan 20 18:24:33 crc kubenswrapper[4661]: E0120 18:24:33.644718 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-wq7m7" podUID="289c1012-9041-4cb4-baa7-888d31048e4c" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.662920 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba6f71cd-96ff-473f-b875-b8f44a58dab4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.662955 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ecbb4c9-acd4-45bd-a270-8798d5aa5926-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.662970 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48gzx\" (UniqueName: \"kubernetes.io/projected/8ecbb4c9-acd4-45bd-a270-8798d5aa5926-kube-api-access-48gzx\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:33 crc 
kubenswrapper[4661]: I0120 18:24:33.662981 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d42815f3-34a4-40b4-bfca-cc453cb7d569-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.662994 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxw7c\" (UniqueName: \"kubernetes.io/projected/ba6f71cd-96ff-473f-b875-b8f44a58dab4-kube-api-access-wxw7c\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.663006 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jffwg\" (UniqueName: \"kubernetes.io/projected/d42815f3-34a4-40b4-bfca-cc453cb7d569-kube-api-access-jffwg\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.663018 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c78vr\" (UniqueName: \"kubernetes.io/projected/becce439-c271-41e5-9d39-0bdd4b284772-kube-api-access-c78vr\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:33 crc kubenswrapper[4661]: I0120 18:24:33.663032 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-555np\" (UniqueName: \"kubernetes.io/projected/8ca68c3b-f314-4a85-80c3-e3ff0a17b449-kube-api-access-555np\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:39 crc kubenswrapper[4661]: I0120 18:24:39.697831 4661 generic.go:334] "Generic (PLEG): container finished" podID="64020290-0e73-480d-b523-d7cb664eacfd" containerID="77925133481db4cab9d22b8313e107aa9311cb661804b6c28eea5f7f4979af4b" exitCode=0 Jan 20 18:24:39 crc kubenswrapper[4661]: I0120 18:24:39.697914 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tck9n" event={"ID":"64020290-0e73-480d-b523-d7cb664eacfd","Type":"ContainerDied","Data":"77925133481db4cab9d22b8313e107aa9311cb661804b6c28eea5f7f4979af4b"} Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.056983 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.109151 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-config-data\") pod \"64020290-0e73-480d-b523-d7cb664eacfd\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.109349 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-db-sync-config-data\") pod \"64020290-0e73-480d-b523-d7cb664eacfd\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.109379 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlhwh\" (UniqueName: \"kubernetes.io/projected/64020290-0e73-480d-b523-d7cb664eacfd-kube-api-access-vlhwh\") pod \"64020290-0e73-480d-b523-d7cb664eacfd\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.109397 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-combined-ca-bundle\") pod \"64020290-0e73-480d-b523-d7cb664eacfd\" (UID: \"64020290-0e73-480d-b523-d7cb664eacfd\") " Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.114591 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64020290-0e73-480d-b523-d7cb664eacfd-kube-api-access-vlhwh" (OuterVolumeSpecName: "kube-api-access-vlhwh") pod "64020290-0e73-480d-b523-d7cb664eacfd" (UID: "64020290-0e73-480d-b523-d7cb664eacfd"). InnerVolumeSpecName "kube-api-access-vlhwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.117600 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "64020290-0e73-480d-b523-d7cb664eacfd" (UID: "64020290-0e73-480d-b523-d7cb664eacfd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.130003 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64020290-0e73-480d-b523-d7cb664eacfd" (UID: "64020290-0e73-480d-b523-d7cb664eacfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.154210 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-config-data" (OuterVolumeSpecName: "config-data") pod "64020290-0e73-480d-b523-d7cb664eacfd" (UID: "64020290-0e73-480d-b523-d7cb664eacfd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.212132 4661 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.212174 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlhwh\" (UniqueName: \"kubernetes.io/projected/64020290-0e73-480d-b523-d7cb664eacfd-kube-api-access-vlhwh\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.212185 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.212195 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64020290-0e73-480d-b523-d7cb664eacfd-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.715463 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tck9n" event={"ID":"64020290-0e73-480d-b523-d7cb664eacfd","Type":"ContainerDied","Data":"79e213f3358db4cd094bf3bccfc3937b7be5bf666a77547574f1bb83bb60a318"} Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.715506 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79e213f3358db4cd094bf3bccfc3937b7be5bf666a77547574f1bb83bb60a318" Jan 20 18:24:41 crc kubenswrapper[4661]: I0120 18:24:41.715560 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tck9n" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150368 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-wt75k"] Jan 20 18:24:42 crc kubenswrapper[4661]: E0120 18:24:42.150612 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6f71cd-96ff-473f-b875-b8f44a58dab4" containerName="mariadb-account-create-update" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150625 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6f71cd-96ff-473f-b875-b8f44a58dab4" containerName="mariadb-account-create-update" Jan 20 18:24:42 crc kubenswrapper[4661]: E0120 18:24:42.150636 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca68c3b-f314-4a85-80c3-e3ff0a17b449" containerName="mariadb-account-create-update" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150641 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca68c3b-f314-4a85-80c3-e3ff0a17b449" containerName="mariadb-account-create-update" Jan 20 18:24:42 crc kubenswrapper[4661]: E0120 18:24:42.150655 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2" containerName="mariadb-database-create" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150661 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2" containerName="mariadb-database-create" Jan 20 18:24:42 crc kubenswrapper[4661]: E0120 18:24:42.150687 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abdc289a-5047-465e-ac25-3934ef40796c" containerName="ovn-config" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150692 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="abdc289a-5047-465e-ac25-3934ef40796c" containerName="ovn-config" Jan 20 18:24:42 crc kubenswrapper[4661]: E0120 18:24:42.150704 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64020290-0e73-480d-b523-d7cb664eacfd" containerName="glance-db-sync" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150710 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="64020290-0e73-480d-b523-d7cb664eacfd" containerName="glance-db-sync" Jan 20 18:24:42 crc kubenswrapper[4661]: E0120 18:24:42.150723 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42815f3-34a4-40b4-bfca-cc453cb7d569" containerName="mariadb-database-create" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150728 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42815f3-34a4-40b4-bfca-cc453cb7d569" containerName="mariadb-database-create" Jan 20 18:24:42 crc kubenswrapper[4661]: E0120 18:24:42.150735 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="becce439-c271-41e5-9d39-0bdd4b284772" containerName="mariadb-database-create" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150740 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="becce439-c271-41e5-9d39-0bdd4b284772" containerName="mariadb-database-create" Jan 20 18:24:42 crc kubenswrapper[4661]: E0120 18:24:42.150749 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ecbb4c9-acd4-45bd-a270-8798d5aa5926" containerName="mariadb-account-create-update" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150754 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecbb4c9-acd4-45bd-a270-8798d5aa5926" containerName="mariadb-account-create-update" Jan 20 18:24:42 crc 
kubenswrapper[4661]: I0120 18:24:42.150883 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ecbb4c9-acd4-45bd-a270-8798d5aa5926" containerName="mariadb-account-create-update" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150893 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca68c3b-f314-4a85-80c3-e3ff0a17b449" containerName="mariadb-account-create-update" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150900 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="abdc289a-5047-465e-ac25-3934ef40796c" containerName="ovn-config" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150908 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="becce439-c271-41e5-9d39-0bdd4b284772" containerName="mariadb-database-create" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150919 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba6f71cd-96ff-473f-b875-b8f44a58dab4" containerName="mariadb-account-create-update" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150929 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="d42815f3-34a4-40b4-bfca-cc453cb7d569" containerName="mariadb-database-create" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150940 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2" containerName="mariadb-database-create" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.150950 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="64020290-0e73-480d-b523-d7cb664eacfd" containerName="glance-db-sync" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.151692 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.173245 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-wt75k"] Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.227619 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb256\" (UniqueName: \"kubernetes.io/projected/67352b8c-a5b5-427d-9bdd-861d7d00a88e-kube-api-access-bb256\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.227716 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-config\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.227806 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.227840 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: 
\"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.227864 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.329132 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.330014 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.330084 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.330109 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.330155 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb256\" (UniqueName: \"kubernetes.io/projected/67352b8c-a5b5-427d-9bdd-861d7d00a88e-kube-api-access-bb256\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.330193 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-config\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.330755 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-config\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.331536 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 
crc kubenswrapper[4661]: I0120 18:24:42.331550 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.355596 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb256\" (UniqueName: \"kubernetes.io/projected/67352b8c-a5b5-427d-9bdd-861d7d00a88e-kube-api-access-bb256\") pod \"dnsmasq-dns-54f9b7b8d9-wt75k\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:42 crc kubenswrapper[4661]: I0120 18:24:42.465961 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:43 crc kubenswrapper[4661]: I0120 18:24:43.094174 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-wt75k"] Jan 20 18:24:43 crc kubenswrapper[4661]: I0120 18:24:43.729955 4661 generic.go:334] "Generic (PLEG): container finished" podID="67352b8c-a5b5-427d-9bdd-861d7d00a88e" containerID="8eaa12322b0b3170eb72537434f6c82bd18209e98ebff2477a21644138bf30cb" exitCode=0 Jan 20 18:24:43 crc kubenswrapper[4661]: I0120 18:24:43.730018 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" event={"ID":"67352b8c-a5b5-427d-9bdd-861d7d00a88e","Type":"ContainerDied","Data":"8eaa12322b0b3170eb72537434f6c82bd18209e98ebff2477a21644138bf30cb"} Jan 20 18:24:43 crc kubenswrapper[4661]: I0120 18:24:43.732958 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" event={"ID":"67352b8c-a5b5-427d-9bdd-861d7d00a88e","Type":"ContainerStarted","Data":"892183cfd587235d37a7e20d865348b6d47f3773938e6bbc7cd6ad03ddf14bc2"} Jan 20 18:24:44 crc kubenswrapper[4661]: I0120 18:24:44.741723 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" event={"ID":"67352b8c-a5b5-427d-9bdd-861d7d00a88e","Type":"ContainerStarted","Data":"bd86fe1a03feb51f3909e976ceffa7d222d3e7eb2e857245973975817a67c234"} Jan 20 18:24:44 crc kubenswrapper[4661]: I0120 18:24:44.742067 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:44 crc kubenswrapper[4661]: I0120 18:24:44.760893 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" podStartSLOduration=2.760874256 podStartE2EDuration="2.760874256s" podCreationTimestamp="2026-01-20 18:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:24:44.759495219 +0000 UTC m=+1141.090284901" watchObservedRunningTime="2026-01-20 18:24:44.760874256 +0000 UTC m=+1141.091663918" Jan 20 18:24:46 crc kubenswrapper[4661]: I0120 18:24:46.765071 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wq7m7" event={"ID":"289c1012-9041-4cb4-baa7-888d31048e4c","Type":"ContainerStarted","Data":"25897a81bce10762f0db6c8058da7297d3ed4363b9e3345f1fd34da2d3873903"} Jan 20 18:24:48 crc kubenswrapper[4661]: I0120 18:24:48.785402 4661 generic.go:334] "Generic (PLEG): container finished" podID="289c1012-9041-4cb4-baa7-888d31048e4c" 
containerID="25897a81bce10762f0db6c8058da7297d3ed4363b9e3345f1fd34da2d3873903" exitCode=0 Jan 20 18:24:48 crc kubenswrapper[4661]: I0120 18:24:48.785628 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wq7m7" event={"ID":"289c1012-9041-4cb4-baa7-888d31048e4c","Type":"ContainerDied","Data":"25897a81bce10762f0db6c8058da7297d3ed4363b9e3345f1fd34da2d3873903"} Jan 20 18:24:50 crc kubenswrapper[4661]: I0120 18:24:50.193399 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wq7m7" Jan 20 18:24:50 crc kubenswrapper[4661]: I0120 18:24:50.364178 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289c1012-9041-4cb4-baa7-888d31048e4c-combined-ca-bundle\") pod \"289c1012-9041-4cb4-baa7-888d31048e4c\" (UID: \"289c1012-9041-4cb4-baa7-888d31048e4c\") " Jan 20 18:24:50 crc kubenswrapper[4661]: I0120 18:24:50.364365 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx5xx\" (UniqueName: \"kubernetes.io/projected/289c1012-9041-4cb4-baa7-888d31048e4c-kube-api-access-dx5xx\") pod \"289c1012-9041-4cb4-baa7-888d31048e4c\" (UID: \"289c1012-9041-4cb4-baa7-888d31048e4c\") " Jan 20 18:24:50 crc kubenswrapper[4661]: I0120 18:24:50.364409 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/289c1012-9041-4cb4-baa7-888d31048e4c-config-data\") pod \"289c1012-9041-4cb4-baa7-888d31048e4c\" (UID: \"289c1012-9041-4cb4-baa7-888d31048e4c\") " Jan 20 18:24:50 crc kubenswrapper[4661]: I0120 18:24:50.370852 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289c1012-9041-4cb4-baa7-888d31048e4c-kube-api-access-dx5xx" (OuterVolumeSpecName: "kube-api-access-dx5xx") pod "289c1012-9041-4cb4-baa7-888d31048e4c" (UID: "289c1012-9041-4cb4-baa7-888d31048e4c"). InnerVolumeSpecName "kube-api-access-dx5xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:50 crc kubenswrapper[4661]: I0120 18:24:50.410268 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289c1012-9041-4cb4-baa7-888d31048e4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "289c1012-9041-4cb4-baa7-888d31048e4c" (UID: "289c1012-9041-4cb4-baa7-888d31048e4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:24:50 crc kubenswrapper[4661]: I0120 18:24:50.434846 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289c1012-9041-4cb4-baa7-888d31048e4c-config-data" (OuterVolumeSpecName: "config-data") pod "289c1012-9041-4cb4-baa7-888d31048e4c" (UID: "289c1012-9041-4cb4-baa7-888d31048e4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:24:50 crc kubenswrapper[4661]: I0120 18:24:50.466567 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/289c1012-9041-4cb4-baa7-888d31048e4c-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:50 crc kubenswrapper[4661]: I0120 18:24:50.466612 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289c1012-9041-4cb4-baa7-888d31048e4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:50 crc kubenswrapper[4661]: I0120 18:24:50.466633 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx5xx\" (UniqueName: \"kubernetes.io/projected/289c1012-9041-4cb4-baa7-888d31048e4c-kube-api-access-dx5xx\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:50 crc kubenswrapper[4661]: I0120 18:24:50.805622 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wq7m7" event={"ID":"289c1012-9041-4cb4-baa7-888d31048e4c","Type":"ContainerDied","Data":"980128280ba8a699e9e351466c542c5555edcc240c07dea0542e0ac2bb69d3b2"} Jan 20 18:24:50 crc kubenswrapper[4661]: I0120 18:24:50.805683 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wq7m7" Jan 20 18:24:50 crc kubenswrapper[4661]: I0120 18:24:50.805690 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="980128280ba8a699e9e351466c542c5555edcc240c07dea0542e0ac2bb69d3b2" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.091552 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-wt75k"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.092053 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" podUID="67352b8c-a5b5-427d-9bdd-861d7d00a88e" containerName="dnsmasq-dns" containerID="cri-o://bd86fe1a03feb51f3909e976ceffa7d222d3e7eb2e857245973975817a67c234" gracePeriod=10 Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.094169 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.182634 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-jh8m5"] Jan 20 18:24:51 crc kubenswrapper[4661]: E0120 18:24:51.182913 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289c1012-9041-4cb4-baa7-888d31048e4c" containerName="keystone-db-sync" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.182929 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="289c1012-9041-4cb4-baa7-888d31048e4c" containerName="keystone-db-sync" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.183093 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="289c1012-9041-4cb4-baa7-888d31048e4c" containerName="keystone-db-sync" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.183804 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.205864 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hkmmn"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.209038 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.215327 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.215612 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.215742 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.215851 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bvbmp" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.215985 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.241436 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hkmmn"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.258371 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-jh8m5"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.285580 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7rwc\" (UniqueName: \"kubernetes.io/projected/f97254a4-d267-447c-a674-ccebc78cc066-kube-api-access-f7rwc\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.285626 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-dns-svc\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.285646 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-config\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.285685 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.285768 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.387491 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-scripts\") pod \"keystone-bootstrap-hkmmn\" (UID: 
\"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.387862 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.387910 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-credential-keys\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.387952 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-combined-ca-bundle\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.387986 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29m7d\" (UniqueName: \"kubernetes.io/projected/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-kube-api-access-29m7d\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.388148 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7rwc\" (UniqueName: \"kubernetes.io/projected/f97254a4-d267-447c-a674-ccebc78cc066-kube-api-access-f7rwc\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.388203 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-dns-svc\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.388234 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-config\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.388254 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-config-data\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.388292 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-fernet-keys\") pod \"keystone-bootstrap-hkmmn\" (UID: 
\"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.388330 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.389051 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.389413 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.389562 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-dns-svc\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.390383 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-config\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.419952 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7rwc\" (UniqueName: \"kubernetes.io/projected/f97254a4-d267-447c-a674-ccebc78cc066-kube-api-access-f7rwc\") pod \"dnsmasq-dns-6546db6db7-jh8m5\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.441491 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-glwqq"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.442457 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.445093 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tl24v" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.445290 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.445403 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.450085 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-glwqq"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.489791 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-config-data\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.489841 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-fernet-keys\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.489939 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-scripts\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.489976 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-credential-keys\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.489997 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-combined-ca-bundle\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.490034 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29m7d\" (UniqueName: \"kubernetes.io/projected/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-kube-api-access-29m7d\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.503470 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-scripts\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.504046 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-fernet-keys\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.514420 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-config-data\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.517600 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-credential-keys\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.522218 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.524292 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.524401 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-combined-ca-bundle\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.536627 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.536916 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.546215 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29m7d\" (UniqueName: \"kubernetes.io/projected/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-kube-api-access-29m7d\") pod \"keystone-bootstrap-hkmmn\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.579757 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.593322 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2423d758-4514-439d-a804-42287945bedc-etc-machine-id\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.593395 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-combined-ca-bundle\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.593414 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjndl\" (UniqueName: 
\"kubernetes.io/projected/2423d758-4514-439d-a804-42287945bedc-kube-api-access-fjndl\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.593494 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-config-data\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.593541 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-scripts\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.593581 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-db-sync-config-data\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.602402 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.607227 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6mvml"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.608352 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6mvml" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.628278 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6dlvz" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.628642 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.636993 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.637238 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.667029 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6mvml"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700189 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-config-data\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700253 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-scripts\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700287 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-scripts\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700305 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-config\") pod \"neutron-db-sync-6mvml\" (UID: \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\") " pod="openstack/neutron-db-sync-6mvml" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700334 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-combined-ca-bundle\") pod \"neutron-db-sync-6mvml\" (UID: \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\") " pod="openstack/neutron-db-sync-6mvml" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700382 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-db-sync-config-data\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700408 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700444 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-config-data\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700463 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-log-httpd\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " 
pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700502 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2423d758-4514-439d-a804-42287945bedc-etc-machine-id\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700518 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-run-httpd\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700547 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-combined-ca-bundle\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700564 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjndl\" (UniqueName: \"kubernetes.io/projected/2423d758-4514-439d-a804-42287945bedc-kube-api-access-fjndl\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700582 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700614 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt65f\" (UniqueName: \"kubernetes.io/projected/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-kube-api-access-wt65f\") pod \"neutron-db-sync-6mvml\" (UID: \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\") " pod="openstack/neutron-db-sync-6mvml" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.700651 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlm7x\" (UniqueName: \"kubernetes.io/projected/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-kube-api-access-tlm7x\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.702818 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-bbdtt"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.703841 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bbdtt" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.706908 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2423d758-4514-439d-a804-42287945bedc-etc-machine-id\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.711167 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-db-sync-config-data\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.714215 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-scripts\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.714307 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-config-data\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.714398 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-combined-ca-bundle\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.725320 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jrm7t" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.738026 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.738208 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bbdtt"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.751108 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8t5pf"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.752116 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.762489 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vj6nj" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.762829 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.763733 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8t5pf"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.764259 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjndl\" (UniqueName: \"kubernetes.io/projected/2423d758-4514-439d-a804-42287945bedc-kube-api-access-fjndl\") pod \"cinder-db-sync-glwqq\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.757658 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.776887 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-glwqq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.804448 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.804534 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt65f\" (UniqueName: \"kubernetes.io/projected/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-kube-api-access-wt65f\") pod \"neutron-db-sync-6mvml\" (UID: \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\") " pod="openstack/neutron-db-sync-6mvml" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.804614 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlm7x\" (UniqueName: \"kubernetes.io/projected/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-kube-api-access-tlm7x\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.804725 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-scripts\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.804746 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-config\") pod \"neutron-db-sync-6mvml\" (UID: \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\") " pod="openstack/neutron-db-sync-6mvml" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.804801 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-combined-ca-bundle\") pod \"neutron-db-sync-6mvml\" (UID: \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\") " pod="openstack/neutron-db-sync-6mvml" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.804871 4661 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.804898 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-config-data\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.804951 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-log-httpd\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.804990 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-run-httpd\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.805951 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-run-httpd\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.808481 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-log-httpd\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.816654 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-config-data\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.818720 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-scripts\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.825116 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-combined-ca-bundle\") pod \"neutron-db-sync-6mvml\" (UID: \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\") " pod="openstack/neutron-db-sync-6mvml" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.825241 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.825740 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-config\") pod \"neutron-db-sync-6mvml\" (UID: \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\") " pod="openstack/neutron-db-sync-6mvml" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.826216 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.857217 4661 generic.go:334] "Generic (PLEG): container finished" podID="67352b8c-a5b5-427d-9bdd-861d7d00a88e" containerID="bd86fe1a03feb51f3909e976ceffa7d222d3e7eb2e857245973975817a67c234" exitCode=0 Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.857273 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" event={"ID":"67352b8c-a5b5-427d-9bdd-861d7d00a88e","Type":"ContainerDied","Data":"bd86fe1a03feb51f3909e976ceffa7d222d3e7eb2e857245973975817a67c234"} Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.868708 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt65f\" (UniqueName: \"kubernetes.io/projected/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-kube-api-access-wt65f\") pod \"neutron-db-sync-6mvml\" (UID: \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\") " pod="openstack/neutron-db-sync-6mvml" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.879741 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlm7x\" (UniqueName: \"kubernetes.io/projected/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-kube-api-access-tlm7x\") pod \"ceilometer-0\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.888878 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-jh8m5"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.898269 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-ntqbq"] Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.899489 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.909340 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.909377 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-config-data\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.909420 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-config\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.909443 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bq2l\" (UniqueName: \"kubernetes.io/projected/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-kube-api-access-5bq2l\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.909478 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-scripts\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.909514 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.909540 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5443a645-bf2b-48db-8111-efa81b46526c-db-sync-config-data\") pod \"barbican-db-sync-bbdtt\" (UID: \"5443a645-bf2b-48db-8111-efa81b46526c\") " pod="openstack/barbican-db-sync-bbdtt" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.909564 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46n9d\" (UniqueName: \"kubernetes.io/projected/5443a645-bf2b-48db-8111-efa81b46526c-kube-api-access-46n9d\") pod \"barbican-db-sync-bbdtt\" (UID: \"5443a645-bf2b-48db-8111-efa81b46526c\") " pod="openstack/barbican-db-sync-bbdtt" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.909583 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-logs\") pod \"placement-db-sync-8t5pf\" (UID: 
\"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.909600 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcnqk\" (UniqueName: \"kubernetes.io/projected/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-kube-api-access-bcnqk\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.909617 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.909634 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5443a645-bf2b-48db-8111-efa81b46526c-combined-ca-bundle\") pod \"barbican-db-sync-bbdtt\" (UID: \"5443a645-bf2b-48db-8111-efa81b46526c\") " pod="openstack/barbican-db-sync-bbdtt" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.909653 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-combined-ca-bundle\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.918427 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:24:51 crc kubenswrapper[4661]: I0120 18:24:51.990239 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6mvml" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.033939 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-ntqbq"] Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.037509 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5443a645-bf2b-48db-8111-efa81b46526c-db-sync-config-data\") pod \"barbican-db-sync-bbdtt\" (UID: \"5443a645-bf2b-48db-8111-efa81b46526c\") " pod="openstack/barbican-db-sync-bbdtt" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.040388 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46n9d\" (UniqueName: \"kubernetes.io/projected/5443a645-bf2b-48db-8111-efa81b46526c-kube-api-access-46n9d\") pod \"barbican-db-sync-bbdtt\" (UID: \"5443a645-bf2b-48db-8111-efa81b46526c\") " pod="openstack/barbican-db-sync-bbdtt" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.040462 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-logs\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.040502 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcnqk\" (UniqueName: \"kubernetes.io/projected/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-kube-api-access-bcnqk\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.040536 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.040565 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5443a645-bf2b-48db-8111-efa81b46526c-combined-ca-bundle\") pod \"barbican-db-sync-bbdtt\" (UID: \"5443a645-bf2b-48db-8111-efa81b46526c\") " pod="openstack/barbican-db-sync-bbdtt" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.040610 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-combined-ca-bundle\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.040655 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.040690 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-config-data\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.040788 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-config\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.040835 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bq2l\" (UniqueName: \"kubernetes.io/projected/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-kube-api-access-5bq2l\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.040912 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-scripts\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.041003 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.041971 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.042955 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-logs\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.043118 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5443a645-bf2b-48db-8111-efa81b46526c-db-sync-config-data\") pod \"barbican-db-sync-bbdtt\" (UID: \"5443a645-bf2b-48db-8111-efa81b46526c\") " pod="openstack/barbican-db-sync-bbdtt" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.059210 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.059619 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-config\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 
18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.057375 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.068927 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5443a645-bf2b-48db-8111-efa81b46526c-combined-ca-bundle\") pod \"barbican-db-sync-bbdtt\" (UID: \"5443a645-bf2b-48db-8111-efa81b46526c\") " pod="openstack/barbican-db-sync-bbdtt" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.066025 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-scripts\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.102523 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-config-data\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.106356 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcnqk\" (UniqueName: \"kubernetes.io/projected/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-kube-api-access-bcnqk\") pod \"dnsmasq-dns-7987f74bbc-ntqbq\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.107163 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46n9d\" (UniqueName: \"kubernetes.io/projected/5443a645-bf2b-48db-8111-efa81b46526c-kube-api-access-46n9d\") pod \"barbican-db-sync-bbdtt\" (UID: \"5443a645-bf2b-48db-8111-efa81b46526c\") " pod="openstack/barbican-db-sync-bbdtt" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.109700 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bq2l\" (UniqueName: \"kubernetes.io/projected/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-kube-api-access-5bq2l\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.114041 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-combined-ca-bundle\") pod \"placement-db-sync-8t5pf\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.137178 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.244898 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-ovsdbserver-nb\") pod \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.245167 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-dns-svc\") pod \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.245206 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb256\" (UniqueName: \"kubernetes.io/projected/67352b8c-a5b5-427d-9bdd-861d7d00a88e-kube-api-access-bb256\") pod \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.245252 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-ovsdbserver-sb\") pod \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.245330 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-config\") pod \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\" (UID: \"67352b8c-a5b5-427d-9bdd-861d7d00a88e\") " Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.255621 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67352b8c-a5b5-427d-9bdd-861d7d00a88e-kube-api-access-bb256" (OuterVolumeSpecName: "kube-api-access-bb256") pod "67352b8c-a5b5-427d-9bdd-861d7d00a88e" (UID: "67352b8c-a5b5-427d-9bdd-861d7d00a88e"). InnerVolumeSpecName "kube-api-access-bb256". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.347642 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb256\" (UniqueName: \"kubernetes.io/projected/67352b8c-a5b5-427d-9bdd-861d7d00a88e-kube-api-access-bb256\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.357759 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.371571 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "67352b8c-a5b5-427d-9bdd-861d7d00a88e" (UID: "67352b8c-a5b5-427d-9bdd-861d7d00a88e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.372049 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bbdtt" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.380941 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-config" (OuterVolumeSpecName: "config") pod "67352b8c-a5b5-427d-9bdd-861d7d00a88e" (UID: "67352b8c-a5b5-427d-9bdd-861d7d00a88e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.397006 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-jh8m5"] Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.411607 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8t5pf" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.424158 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "67352b8c-a5b5-427d-9bdd-861d7d00a88e" (UID: "67352b8c-a5b5-427d-9bdd-861d7d00a88e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.439155 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hkmmn"] Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.450057 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "67352b8c-a5b5-427d-9bdd-861d7d00a88e" (UID: "67352b8c-a5b5-427d-9bdd-861d7d00a88e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.450558 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.450582 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.451733 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.451820 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67352b8c-a5b5-427d-9bdd-861d7d00a88e-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.637401 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-glwqq"] Jan 20 18:24:52 crc kubenswrapper[4661]: W0120 18:24:52.675522 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2423d758_4514_439d_a804_42287945bedc.slice/crio-2527365a3c1440fb8bc8be41183b4ce72bb300cab6550737327cb1272dab1fdf WatchSource:0}: Error finding container 2527365a3c1440fb8bc8be41183b4ce72bb300cab6550737327cb1272dab1fdf: Status 404 returned error can't find the container with id 2527365a3c1440fb8bc8be41183b4ce72bb300cab6550737327cb1272dab1fdf Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.847953 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6mvml"] Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.910924 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bbdtt"] Jan 20 18:24:52 crc kubenswrapper[4661]: I0120 18:24:52.919686 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:24:52 crc kubenswrapper[4661]: W0120 18:24:52.937751 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4ab2fe8_a602_4b1c_b880_f1eeb9bde9de.slice/crio-372774cdaeaed41b381e7451a6fa33d00d559751b774a7a0e59e7b6725fca334 WatchSource:0}: Error finding container 372774cdaeaed41b381e7451a6fa33d00d559751b774a7a0e59e7b6725fca334: Status 404 returned error can't find the container with id 372774cdaeaed41b381e7451a6fa33d00d559751b774a7a0e59e7b6725fca334 Jan 20 18:24:53 crc kubenswrapper[4661]: I0120 18:24:52.986172 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" event={"ID":"67352b8c-a5b5-427d-9bdd-861d7d00a88e","Type":"ContainerDied","Data":"892183cfd587235d37a7e20d865348b6d47f3773938e6bbc7cd6ad03ddf14bc2"} Jan 20 18:24:53 crc kubenswrapper[4661]: I0120 18:24:52.986213 4661 scope.go:117] "RemoveContainer" containerID="bd86fe1a03feb51f3909e976ceffa7d222d3e7eb2e857245973975817a67c234" Jan 20 18:24:53 crc kubenswrapper[4661]: I0120 18:24:52.986314 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-wt75k" Jan 20 18:24:53 crc kubenswrapper[4661]: I0120 18:24:53.003704 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hkmmn" event={"ID":"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0","Type":"ContainerStarted","Data":"02ad438504bd6c1619974d731f29f242b1c62ef8bf823a5abd756a198155c8c4"} Jan 20 18:24:53 crc kubenswrapper[4661]: I0120 18:24:53.005556 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-glwqq" event={"ID":"2423d758-4514-439d-a804-42287945bedc","Type":"ContainerStarted","Data":"2527365a3c1440fb8bc8be41183b4ce72bb300cab6550737327cb1272dab1fdf"} Jan 20 18:24:53 crc kubenswrapper[4661]: I0120 18:24:53.006549 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" event={"ID":"f97254a4-d267-447c-a674-ccebc78cc066","Type":"ContainerStarted","Data":"51f9a3b20eeceacd49b14c11bb37f003929c7ea54d97d1e0abf534527857f8c7"} Jan 20 18:24:53 crc kubenswrapper[4661]: I0120 18:24:53.025613 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-wt75k"] Jan 20 18:24:53 crc kubenswrapper[4661]: I0120 18:24:53.038782 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-wt75k"] Jan 20 18:24:53 crc kubenswrapper[4661]: I0120 18:24:53.041272 4661 scope.go:117] "RemoveContainer" containerID="8eaa12322b0b3170eb72537434f6c82bd18209e98ebff2477a21644138bf30cb" Jan 20 18:24:53 crc kubenswrapper[4661]: I0120 18:24:53.041405 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8t5pf"] Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.024329 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bbdtt" event={"ID":"5443a645-bf2b-48db-8111-efa81b46526c","Type":"ContainerStarted","Data":"91b904b0f546bec369c24189a2e76caad6cbdc7109e76272f202ab195232c7af"} Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.037613 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8t5pf" event={"ID":"dbaa46dc-18ed-41e6-84b6-86daf834ffd4","Type":"ContainerStarted","Data":"206e11d3f8f9609661028aeb06e82c6978b287bcb24f24835209aa18ca31b328"} Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.055544 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6mvml" event={"ID":"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de","Type":"ContainerStarted","Data":"d78a870eb313e545ddd0b420c52038a328c22cbaaaa42e5563a12a2abedfffc1"} Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.055597 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6mvml" event={"ID":"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de","Type":"ContainerStarted","Data":"372774cdaeaed41b381e7451a6fa33d00d559751b774a7a0e59e7b6725fca334"} Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.068500 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4","Type":"ContainerStarted","Data":"4ada2bd0dae89cc42a8bdd1ac96d284f34e035e0035e9328a2f90ef7a1f58c5d"} Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.086323 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hkmmn" event={"ID":"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0","Type":"ContainerStarted","Data":"f2cc9eecd7e9e582ebb25740836d718a0994063682d827bd0fc03e7f0bdf8b3d"} Jan 20 18:24:54 crc kubenswrapper[4661]: 
I0120 18:24:54.088486 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6mvml" podStartSLOduration=3.088462511 podStartE2EDuration="3.088462511s" podCreationTimestamp="2026-01-20 18:24:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:24:54.075707832 +0000 UTC m=+1150.406497514" watchObservedRunningTime="2026-01-20 18:24:54.088462511 +0000 UTC m=+1150.419252173" Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.133058 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.147206 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hkmmn" podStartSLOduration=3.147186221 podStartE2EDuration="3.147186221s" podCreationTimestamp="2026-01-20 18:24:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:24:54.120237905 +0000 UTC m=+1150.451027567" watchObservedRunningTime="2026-01-20 18:24:54.147186221 +0000 UTC m=+1150.477975883" Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.152828 4661 generic.go:334] "Generic (PLEG): container finished" podID="f97254a4-d267-447c-a674-ccebc78cc066" containerID="3230c11e806e2b67b9932989e4759ef2dcaf430dbafa83cfedf5d94ea384985c" exitCode=0 Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.161047 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" event={"ID":"f97254a4-d267-447c-a674-ccebc78cc066","Type":"ContainerDied","Data":"3230c11e806e2b67b9932989e4759ef2dcaf430dbafa83cfedf5d94ea384985c"} Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.188198 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67352b8c-a5b5-427d-9bdd-861d7d00a88e" path="/var/lib/kubelet/pods/67352b8c-a5b5-427d-9bdd-861d7d00a88e/volumes" Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.273131 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-ntqbq"] Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.579727 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.652143 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7rwc\" (UniqueName: \"kubernetes.io/projected/f97254a4-d267-447c-a674-ccebc78cc066-kube-api-access-f7rwc\") pod \"f97254a4-d267-447c-a674-ccebc78cc066\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.652252 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-ovsdbserver-nb\") pod \"f97254a4-d267-447c-a674-ccebc78cc066\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.652285 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-ovsdbserver-sb\") pod \"f97254a4-d267-447c-a674-ccebc78cc066\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.652328 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-dns-svc\") pod \"f97254a4-d267-447c-a674-ccebc78cc066\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.652433 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-config\") pod \"f97254a4-d267-447c-a674-ccebc78cc066\" (UID: \"f97254a4-d267-447c-a674-ccebc78cc066\") " Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.663551 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97254a4-d267-447c-a674-ccebc78cc066-kube-api-access-f7rwc" (OuterVolumeSpecName: "kube-api-access-f7rwc") pod "f97254a4-d267-447c-a674-ccebc78cc066" (UID: "f97254a4-d267-447c-a674-ccebc78cc066"). InnerVolumeSpecName "kube-api-access-f7rwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.687598 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f97254a4-d267-447c-a674-ccebc78cc066" (UID: "f97254a4-d267-447c-a674-ccebc78cc066"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.688552 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f97254a4-d267-447c-a674-ccebc78cc066" (UID: "f97254a4-d267-447c-a674-ccebc78cc066"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.691794 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f97254a4-d267-447c-a674-ccebc78cc066" (UID: "f97254a4-d267-447c-a674-ccebc78cc066"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.699330 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-config" (OuterVolumeSpecName: "config") pod "f97254a4-d267-447c-a674-ccebc78cc066" (UID: "f97254a4-d267-447c-a674-ccebc78cc066"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.753853 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7rwc\" (UniqueName: \"kubernetes.io/projected/f97254a4-d267-447c-a674-ccebc78cc066-kube-api-access-f7rwc\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.753887 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.753896 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.753906 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:54 crc kubenswrapper[4661]: I0120 18:24:54.753921 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97254a4-d267-447c-a674-ccebc78cc066-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:24:55 crc kubenswrapper[4661]: I0120 18:24:55.165649 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" event={"ID":"f97254a4-d267-447c-a674-ccebc78cc066","Type":"ContainerDied","Data":"51f9a3b20eeceacd49b14c11bb37f003929c7ea54d97d1e0abf534527857f8c7"} Jan 20 18:24:55 crc kubenswrapper[4661]: I0120 18:24:55.165721 4661 scope.go:117] "RemoveContainer" containerID="3230c11e806e2b67b9932989e4759ef2dcaf430dbafa83cfedf5d94ea384985c" Jan 20 18:24:55 crc kubenswrapper[4661]: I0120 18:24:55.165828 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-jh8m5" Jan 20 18:24:55 crc kubenswrapper[4661]: I0120 18:24:55.170397 4661 generic.go:334] "Generic (PLEG): container finished" podID="ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" containerID="6096e0cd0b5f587a882bb0f8c6d764ac949ff8f6bf7be3165a1450ce199187f1" exitCode=0 Jan 20 18:24:55 crc kubenswrapper[4661]: I0120 18:24:55.171649 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" event={"ID":"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be","Type":"ContainerDied","Data":"6096e0cd0b5f587a882bb0f8c6d764ac949ff8f6bf7be3165a1450ce199187f1"} Jan 20 18:24:55 crc kubenswrapper[4661]: I0120 18:24:55.171772 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" event={"ID":"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be","Type":"ContainerStarted","Data":"fbd602182f4aa96279a49ee95ca5033da750d8f396e05372eff4f8886a3f6fd5"} Jan 20 18:24:55 crc kubenswrapper[4661]: I0120 18:24:55.246783 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-jh8m5"] Jan 20 18:24:55 crc kubenswrapper[4661]: I0120 18:24:55.261013 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-jh8m5"] Jan 20 18:24:56 crc kubenswrapper[4661]: I0120 18:24:56.152876 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97254a4-d267-447c-a674-ccebc78cc066" path="/var/lib/kubelet/pods/f97254a4-d267-447c-a674-ccebc78cc066/volumes" Jan 20 18:24:56 crc kubenswrapper[4661]: I0120 18:24:56.180236 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" event={"ID":"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be","Type":"ContainerStarted","Data":"cd107171512ab4dabc070b9cd54cf73aff0688d5411db067faaf56574fdd38b9"} Jan 20 18:24:56 crc kubenswrapper[4661]: I0120 18:24:56.180998 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:24:56 crc kubenswrapper[4661]: I0120 18:24:56.200408 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" podStartSLOduration=5.200390796 podStartE2EDuration="5.200390796s" podCreationTimestamp="2026-01-20 18:24:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:24:56.196131873 +0000 UTC m=+1152.526921535" watchObservedRunningTime="2026-01-20 18:24:56.200390796 +0000 UTC m=+1152.531180458" Jan 20 18:24:58 crc kubenswrapper[4661]: I0120 18:24:58.207757 4661 generic.go:334] "Generic (PLEG): container finished" podID="9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0" containerID="f2cc9eecd7e9e582ebb25740836d718a0994063682d827bd0fc03e7f0bdf8b3d" exitCode=0 Jan 20 18:24:58 crc kubenswrapper[4661]: I0120 18:24:58.208846 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hkmmn" event={"ID":"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0","Type":"ContainerDied","Data":"f2cc9eecd7e9e582ebb25740836d718a0994063682d827bd0fc03e7f0bdf8b3d"} Jan 20 18:24:58 crc kubenswrapper[4661]: I0120 18:24:58.811985 4661 scope.go:117] "RemoveContainer" containerID="f1f59b932d1d5cf88b17584e72cd478a89ed2faa135a43d0e563946d13f57c03" Jan 20 18:24:59 crc kubenswrapper[4661]: I0120 18:24:59.323872 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:24:59 crc kubenswrapper[4661]: I0120 18:24:59.324141 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:25:02 crc kubenswrapper[4661]: I0120 18:25:02.359826 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:25:02 crc kubenswrapper[4661]: I0120 18:25:02.476257 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tpwdx"] Jan 20 18:25:02 crc kubenswrapper[4661]: I0120 18:25:02.476788 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" podUID="ad7ce2d6-6f59-4934-b54c-d1d763e14c22" containerName="dnsmasq-dns" containerID="cri-o://90718394324f3ce0bcc7a3bc9fafd1d5d961f48bebf829f6bc2b013ce89bde2d" gracePeriod=10 Jan 20 18:25:03 crc kubenswrapper[4661]: I0120 18:25:03.249324 4661 generic.go:334] "Generic (PLEG): container finished" podID="ad7ce2d6-6f59-4934-b54c-d1d763e14c22" containerID="90718394324f3ce0bcc7a3bc9fafd1d5d961f48bebf829f6bc2b013ce89bde2d" exitCode=0 Jan 20 18:25:03 crc kubenswrapper[4661]: I0120 18:25:03.249369 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" event={"ID":"ad7ce2d6-6f59-4934-b54c-d1d763e14c22","Type":"ContainerDied","Data":"90718394324f3ce0bcc7a3bc9fafd1d5d961f48bebf829f6bc2b013ce89bde2d"} Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.236483 4661 scope.go:117] "RemoveContainer" containerID="869f1571b1c783266addd965c9676bbe437e95769b1f5be0a46ada7c9cad6742" Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.281057 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hkmmn" event={"ID":"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0","Type":"ContainerDied","Data":"02ad438504bd6c1619974d731f29f242b1c62ef8bf823a5abd756a198155c8c4"} Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.281097 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02ad438504bd6c1619974d731f29f242b1c62ef8bf823a5abd756a198155c8c4" Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.340269 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.433219 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-config-data\") pod \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.433527 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-scripts\") pod \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.433603 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29m7d\" (UniqueName: \"kubernetes.io/projected/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-kube-api-access-29m7d\") pod \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.433761 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-fernet-keys\") pod \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.433785 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-combined-ca-bundle\") pod \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.433809 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-credential-keys\") pod \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.441813 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0" (UID: "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.441985 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-scripts" (OuterVolumeSpecName: "scripts") pod "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0" (UID: "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.446745 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0" (UID: "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.446877 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-kube-api-access-29m7d" (OuterVolumeSpecName: "kube-api-access-29m7d") pod "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0" (UID: "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0"). InnerVolumeSpecName "kube-api-access-29m7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:04 crc kubenswrapper[4661]: E0120 18:25:04.460604 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-config-data podName:9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0 nodeName:}" failed. No retries permitted until 2026-01-20 18:25:04.960561869 +0000 UTC m=+1161.291351531 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-config-data") pod "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0" (UID: "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0") : error deleting /var/lib/kubelet/pods/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0/volume-subpaths: remove /var/lib/kubelet/pods/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0/volume-subpaths: no such file or directory Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.463536 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0" (UID: "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.536222 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29m7d\" (UniqueName: \"kubernetes.io/projected/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-kube-api-access-29m7d\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.536262 4661 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.536275 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.536285 4661 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:04 crc kubenswrapper[4661]: I0120 18:25:04.536295 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.045105 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-config-data\") pod \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\" (UID: \"9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0\") " Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.053576 4661 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-config-data" (OuterVolumeSpecName: "config-data") pod "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0" (UID: "9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.146830 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.155393 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" podUID="ad7ce2d6-6f59-4934-b54c-d1d763e14c22" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.290589 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hkmmn" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.537250 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hkmmn"] Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.553524 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hkmmn"] Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.608038 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-68lzj"] Jan 20 18:25:05 crc kubenswrapper[4661]: E0120 18:25:05.608341 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67352b8c-a5b5-427d-9bdd-861d7d00a88e" containerName="dnsmasq-dns" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.608357 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="67352b8c-a5b5-427d-9bdd-861d7d00a88e" containerName="dnsmasq-dns" Jan 20 18:25:05 crc kubenswrapper[4661]: E0120 18:25:05.608368 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97254a4-d267-447c-a674-ccebc78cc066" containerName="init" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.608375 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97254a4-d267-447c-a674-ccebc78cc066" containerName="init" Jan 20 18:25:05 crc kubenswrapper[4661]: E0120 18:25:05.608388 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0" containerName="keystone-bootstrap" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.608395 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0" containerName="keystone-bootstrap" Jan 20 18:25:05 crc kubenswrapper[4661]: E0120 18:25:05.608411 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67352b8c-a5b5-427d-9bdd-861d7d00a88e" containerName="init" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.608417 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="67352b8c-a5b5-427d-9bdd-861d7d00a88e" containerName="init" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.608546 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97254a4-d267-447c-a674-ccebc78cc066" containerName="init" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.608561 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0" containerName="keystone-bootstrap" Jan 20 18:25:05 crc 
kubenswrapper[4661]: I0120 18:25:05.608576 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="67352b8c-a5b5-427d-9bdd-861d7d00a88e" containerName="dnsmasq-dns" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.609032 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.611645 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.611833 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.611997 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.613099 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.613314 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bvbmp" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.635273 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-68lzj"] Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.655539 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-fernet-keys\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.655629 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlkz6\" (UniqueName: \"kubernetes.io/projected/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-kube-api-access-mlkz6\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.655651 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-config-data\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.655714 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-credential-keys\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.655730 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-combined-ca-bundle\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.655781 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-scripts\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.757199 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-credential-keys\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.757244 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-combined-ca-bundle\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.757301 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-scripts\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.757325 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-fernet-keys\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.757378 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlkz6\" (UniqueName: \"kubernetes.io/projected/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-kube-api-access-mlkz6\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.757395 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-config-data\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.761659 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-scripts\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.762323 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-combined-ca-bundle\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.763869 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-credential-keys\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " 
pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.770366 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-fernet-keys\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.772943 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-config-data\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.776807 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlkz6\" (UniqueName: \"kubernetes.io/projected/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-kube-api-access-mlkz6\") pod \"keystone-bootstrap-68lzj\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:05 crc kubenswrapper[4661]: I0120 18:25:05.930947 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:06 crc kubenswrapper[4661]: I0120 18:25:06.157812 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0" path="/var/lib/kubelet/pods/9d3d5220-e6bb-4e22-9e2a-cec33bdad6a0/volumes" Jan 20 18:25:10 crc kubenswrapper[4661]: I0120 18:25:10.156423 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" podUID="ad7ce2d6-6f59-4934-b54c-d1d763e14c22" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Jan 20 18:25:12 crc kubenswrapper[4661]: E0120 18:25:12.349967 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 20 18:25:12 crc kubenswrapper[4661]: E0120 18:25:12.350644 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66hc8h547hf4h66chc8h5dbh5fch78h698hdfh5b8h7dh649h5cdh5b6h654hc9h67h684h67ch59dh65dhcch78hfh599hcbh5f4h567h579h5q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlm7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:25:13 crc kubenswrapper[4661]: E0120 18:25:13.789523 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 20 18:25:13 crc kubenswrapper[4661]: E0120 18:25:13.790004 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fjndl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-glwqq_openstack(2423d758-4514-439d-a804-42287945bedc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:25:13 crc kubenswrapper[4661]: E0120 18:25:13.791444 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-glwqq" podUID="2423d758-4514-439d-a804-42287945bedc" Jan 20 18:25:13 crc kubenswrapper[4661]: I0120 18:25:13.807758 4661 scope.go:117] "RemoveContainer" containerID="e89f92e6d91d4d56c26c3929f326e03d5e3eb2fee1f581a91d9760bd55ade7f1" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.055634 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.133139 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-config\") pod \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.133203 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-ovsdbserver-sb\") pod \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.133227 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-ovsdbserver-nb\") pod \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.133248 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-dns-svc\") pod \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.133289 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2x9d\" (UniqueName: \"kubernetes.io/projected/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-kube-api-access-n2x9d\") pod \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\" (UID: \"ad7ce2d6-6f59-4934-b54c-d1d763e14c22\") " Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.143746 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-kube-api-access-n2x9d" (OuterVolumeSpecName: "kube-api-access-n2x9d") pod "ad7ce2d6-6f59-4934-b54c-d1d763e14c22" (UID: "ad7ce2d6-6f59-4934-b54c-d1d763e14c22"). InnerVolumeSpecName "kube-api-access-n2x9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.237459 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2x9d\" (UniqueName: \"kubernetes.io/projected/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-kube-api-access-n2x9d\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.264784 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-68lzj"] Jan 20 18:25:14 crc kubenswrapper[4661]: W0120 18:25:14.269932 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb097fc1c_9aca_44d7_be7e_cd35bf67f9f5.slice/crio-6fb9951cee545bb785b9462abbc4d79b8f0cc6272606175698594f3a2ebe0485 WatchSource:0}: Error finding container 6fb9951cee545bb785b9462abbc4d79b8f0cc6272606175698594f3a2ebe0485: Status 404 returned error can't find the container with id 6fb9951cee545bb785b9462abbc4d79b8f0cc6272606175698594f3a2ebe0485 Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.281386 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ad7ce2d6-6f59-4934-b54c-d1d763e14c22" (UID: "ad7ce2d6-6f59-4934-b54c-d1d763e14c22"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.281806 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-config" (OuterVolumeSpecName: "config") pod "ad7ce2d6-6f59-4934-b54c-d1d763e14c22" (UID: "ad7ce2d6-6f59-4934-b54c-d1d763e14c22"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.282131 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ad7ce2d6-6f59-4934-b54c-d1d763e14c22" (UID: "ad7ce2d6-6f59-4934-b54c-d1d763e14c22"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.291121 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad7ce2d6-6f59-4934-b54c-d1d763e14c22" (UID: "ad7ce2d6-6f59-4934-b54c-d1d763e14c22"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.339729 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.339768 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.339782 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.339794 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad7ce2d6-6f59-4934-b54c-d1d763e14c22-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.359457 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bbdtt" event={"ID":"5443a645-bf2b-48db-8111-efa81b46526c","Type":"ContainerStarted","Data":"8395cb0b9fda90bec15e72a2609c5c80f301bafa24d002f3743c8fa74498f372"} Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.363632 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" event={"ID":"ad7ce2d6-6f59-4934-b54c-d1d763e14c22","Type":"ContainerDied","Data":"c7f6a11b2515772f670274153e8b86c0ce63d949dfed8f067e9abc14185068c2"} Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.363735 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tpwdx" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.363743 4661 scope.go:117] "RemoveContainer" containerID="90718394324f3ce0bcc7a3bc9fafd1d5d961f48bebf829f6bc2b013ce89bde2d" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.367161 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-68lzj" event={"ID":"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5","Type":"ContainerStarted","Data":"6fb9951cee545bb785b9462abbc4d79b8f0cc6272606175698594f3a2ebe0485"} Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.369230 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8t5pf" event={"ID":"dbaa46dc-18ed-41e6-84b6-86daf834ffd4","Type":"ContainerStarted","Data":"166853743caf9feb45b4da68340bd7710917c4dacbe6d5327f532c651daa70fd"} Jan 20 18:25:14 crc kubenswrapper[4661]: E0120 18:25:14.371293 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-glwqq" podUID="2423d758-4514-439d-a804-42287945bedc" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.376247 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-bbdtt" podStartSLOduration=2.548807749 podStartE2EDuration="23.376230701s" podCreationTimestamp="2026-01-20 18:24:51 +0000 UTC" firstStartedPulling="2026-01-20 18:24:52.983917714 +0000 UTC m=+1149.314707376" lastFinishedPulling="2026-01-20 18:25:13.811340666 +0000 UTC m=+1170.142130328" observedRunningTime="2026-01-20 18:25:14.374095885 +0000 UTC m=+1170.704885547" watchObservedRunningTime="2026-01-20 18:25:14.376230701 +0000 UTC m=+1170.707020363" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.397909 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8t5pf" podStartSLOduration=2.706645015 podStartE2EDuration="23.397867571s" podCreationTimestamp="2026-01-20 18:24:51 +0000 UTC" firstStartedPulling="2026-01-20 18:24:53.070145965 +0000 UTC m=+1149.400935617" lastFinishedPulling="2026-01-20 18:25:13.761368511 +0000 UTC m=+1170.092158173" observedRunningTime="2026-01-20 18:25:14.387152249 +0000 UTC m=+1170.717941921" watchObservedRunningTime="2026-01-20 18:25:14.397867571 +0000 UTC m=+1170.728657233" Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.434463 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tpwdx"] Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.444378 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tpwdx"] Jan 20 18:25:14 crc kubenswrapper[4661]: I0120 18:25:14.596285 4661 scope.go:117] "RemoveContainer" containerID="f658e4ffe764882dfdf026ba57515baa960160168300760983d30baa32fdc2ad" Jan 20 18:25:15 crc kubenswrapper[4661]: I0120 18:25:15.379049 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-68lzj" event={"ID":"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5","Type":"ContainerStarted","Data":"dea524a1d31897fc46521509b37fdba03041d35156177fabefea81763cbb3879"} Jan 20 18:25:15 crc kubenswrapper[4661]: I0120 18:25:15.386464 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4","Type":"ContainerStarted","Data":"e40b4ea97366e02aeef3a875ca4e5621745d2a173a54638487639386aeb40bda"} Jan 20 18:25:15 crc kubenswrapper[4661]: I0120 18:25:15.405479 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-68lzj" podStartSLOduration=10.405459484 podStartE2EDuration="10.405459484s" podCreationTimestamp="2026-01-20 18:25:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:25:15.400287648 +0000 UTC m=+1171.731077350" watchObservedRunningTime="2026-01-20 18:25:15.405459484 +0000 UTC m=+1171.736249166" Jan 20 18:25:16 crc kubenswrapper[4661]: I0120 18:25:16.153425 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad7ce2d6-6f59-4934-b54c-d1d763e14c22" path="/var/lib/kubelet/pods/ad7ce2d6-6f59-4934-b54c-d1d763e14c22/volumes" Jan 20 18:25:17 crc kubenswrapper[4661]: I0120 18:25:17.403831 4661 generic.go:334] "Generic (PLEG): container finished" podID="dbaa46dc-18ed-41e6-84b6-86daf834ffd4" containerID="166853743caf9feb45b4da68340bd7710917c4dacbe6d5327f532c651daa70fd" exitCode=0 Jan 20 18:25:17 crc kubenswrapper[4661]: I0120 18:25:17.403878 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8t5pf" event={"ID":"dbaa46dc-18ed-41e6-84b6-86daf834ffd4","Type":"ContainerDied","Data":"166853743caf9feb45b4da68340bd7710917c4dacbe6d5327f532c651daa70fd"} Jan 20 18:25:18 crc kubenswrapper[4661]: I0120 18:25:18.412637 4661 generic.go:334] "Generic (PLEG): container finished" podID="b097fc1c-9aca-44d7-be7e-cd35bf67f9f5" containerID="dea524a1d31897fc46521509b37fdba03041d35156177fabefea81763cbb3879" exitCode=0 Jan 20 18:25:18 crc kubenswrapper[4661]: I0120 18:25:18.413018 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-68lzj" event={"ID":"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5","Type":"ContainerDied","Data":"dea524a1d31897fc46521509b37fdba03041d35156177fabefea81763cbb3879"} Jan 20 18:25:18 crc kubenswrapper[4661]: I0120 18:25:18.415171 4661 generic.go:334] "Generic (PLEG): container finished" podID="f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de" containerID="d78a870eb313e545ddd0b420c52038a328c22cbaaaa42e5563a12a2abedfffc1" exitCode=0 Jan 20 18:25:18 crc kubenswrapper[4661]: I0120 18:25:18.415240 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6mvml" event={"ID":"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de","Type":"ContainerDied","Data":"d78a870eb313e545ddd0b420c52038a328c22cbaaaa42e5563a12a2abedfffc1"} Jan 20 18:25:18 crc kubenswrapper[4661]: I0120 18:25:18.416530 4661 generic.go:334] "Generic (PLEG): container finished" podID="5443a645-bf2b-48db-8111-efa81b46526c" containerID="8395cb0b9fda90bec15e72a2609c5c80f301bafa24d002f3743c8fa74498f372" exitCode=0 Jan 20 18:25:18 crc kubenswrapper[4661]: I0120 18:25:18.416647 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bbdtt" event={"ID":"5443a645-bf2b-48db-8111-efa81b46526c","Type":"ContainerDied","Data":"8395cb0b9fda90bec15e72a2609c5c80f301bafa24d002f3743c8fa74498f372"} Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.571413 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8t5pf" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.629257 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-logs\") pod \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.629657 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-combined-ca-bundle\") pod \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.629708 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-config-data\") pod \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.629827 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-logs" (OuterVolumeSpecName: "logs") pod "dbaa46dc-18ed-41e6-84b6-86daf834ffd4" (UID: "dbaa46dc-18ed-41e6-84b6-86daf834ffd4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.629895 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bq2l\" (UniqueName: \"kubernetes.io/projected/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-kube-api-access-5bq2l\") pod \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.629974 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-scripts\") pod \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\" (UID: \"dbaa46dc-18ed-41e6-84b6-86daf834ffd4\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.630763 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-logs\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.647853 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-scripts" (OuterVolumeSpecName: "scripts") pod "dbaa46dc-18ed-41e6-84b6-86daf834ffd4" (UID: "dbaa46dc-18ed-41e6-84b6-86daf834ffd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.653966 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-kube-api-access-5bq2l" (OuterVolumeSpecName: "kube-api-access-5bq2l") pod "dbaa46dc-18ed-41e6-84b6-86daf834ffd4" (UID: "dbaa46dc-18ed-41e6-84b6-86daf834ffd4"). InnerVolumeSpecName "kube-api-access-5bq2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.687268 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-config-data" (OuterVolumeSpecName: "config-data") pod "dbaa46dc-18ed-41e6-84b6-86daf834ffd4" (UID: "dbaa46dc-18ed-41e6-84b6-86daf834ffd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.691713 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbaa46dc-18ed-41e6-84b6-86daf834ffd4" (UID: "dbaa46dc-18ed-41e6-84b6-86daf834ffd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.731916 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bq2l\" (UniqueName: \"kubernetes.io/projected/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-kube-api-access-5bq2l\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.731957 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.731968 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.731977 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbaa46dc-18ed-41e6-84b6-86daf834ffd4-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.800558 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bbdtt" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.837732 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5443a645-bf2b-48db-8111-efa81b46526c-kube-api-access-46n9d" (OuterVolumeSpecName: "kube-api-access-46n9d") pod "5443a645-bf2b-48db-8111-efa81b46526c" (UID: "5443a645-bf2b-48db-8111-efa81b46526c"). InnerVolumeSpecName "kube-api-access-46n9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.838715 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46n9d\" (UniqueName: \"kubernetes.io/projected/5443a645-bf2b-48db-8111-efa81b46526c-kube-api-access-46n9d\") pod \"5443a645-bf2b-48db-8111-efa81b46526c\" (UID: \"5443a645-bf2b-48db-8111-efa81b46526c\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.838809 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5443a645-bf2b-48db-8111-efa81b46526c-combined-ca-bundle\") pod \"5443a645-bf2b-48db-8111-efa81b46526c\" (UID: \"5443a645-bf2b-48db-8111-efa81b46526c\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.838990 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5443a645-bf2b-48db-8111-efa81b46526c-db-sync-config-data\") pod \"5443a645-bf2b-48db-8111-efa81b46526c\" (UID: \"5443a645-bf2b-48db-8111-efa81b46526c\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.839506 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46n9d\" (UniqueName: \"kubernetes.io/projected/5443a645-bf2b-48db-8111-efa81b46526c-kube-api-access-46n9d\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.843899 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5443a645-bf2b-48db-8111-efa81b46526c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5443a645-bf2b-48db-8111-efa81b46526c" (UID: "5443a645-bf2b-48db-8111-efa81b46526c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.876887 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5443a645-bf2b-48db-8111-efa81b46526c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5443a645-bf2b-48db-8111-efa81b46526c" (UID: "5443a645-bf2b-48db-8111-efa81b46526c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.881914 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.894790 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6mvml" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.941006 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-fernet-keys\") pod \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.941080 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlkz6\" (UniqueName: \"kubernetes.io/projected/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-kube-api-access-mlkz6\") pod \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.941117 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-combined-ca-bundle\") pod \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\" (UID: \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.941163 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-scripts\") pod \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.941248 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt65f\" (UniqueName: \"kubernetes.io/projected/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-kube-api-access-wt65f\") pod \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\" (UID: \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.941353 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-combined-ca-bundle\") pod \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.941379 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-config-data\") pod \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.941415 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-config\") pod \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\" (UID: \"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.941439 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-credential-keys\") pod \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\" (UID: \"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5\") " Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.941927 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5443a645-bf2b-48db-8111-efa81b46526c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 
18:25:19.941956 4661 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5443a645-bf2b-48db-8111-efa81b46526c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.950528 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-kube-api-access-mlkz6" (OuterVolumeSpecName: "kube-api-access-mlkz6") pod "b097fc1c-9aca-44d7-be7e-cd35bf67f9f5" (UID: "b097fc1c-9aca-44d7-be7e-cd35bf67f9f5"). InnerVolumeSpecName "kube-api-access-mlkz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.956952 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-scripts" (OuterVolumeSpecName: "scripts") pod "b097fc1c-9aca-44d7-be7e-cd35bf67f9f5" (UID: "b097fc1c-9aca-44d7-be7e-cd35bf67f9f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.958300 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b097fc1c-9aca-44d7-be7e-cd35bf67f9f5" (UID: "b097fc1c-9aca-44d7-be7e-cd35bf67f9f5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.965255 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b097fc1c-9aca-44d7-be7e-cd35bf67f9f5" (UID: "b097fc1c-9aca-44d7-be7e-cd35bf67f9f5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.967117 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-kube-api-access-wt65f" (OuterVolumeSpecName: "kube-api-access-wt65f") pod "f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de" (UID: "f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de"). InnerVolumeSpecName "kube-api-access-wt65f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.969845 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-config-data" (OuterVolumeSpecName: "config-data") pod "b097fc1c-9aca-44d7-be7e-cd35bf67f9f5" (UID: "b097fc1c-9aca-44d7-be7e-cd35bf67f9f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.970746 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-config" (OuterVolumeSpecName: "config") pod "f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de" (UID: "f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.975706 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b097fc1c-9aca-44d7-be7e-cd35bf67f9f5" (UID: "b097fc1c-9aca-44d7-be7e-cd35bf67f9f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:19 crc kubenswrapper[4661]: I0120 18:25:19.979939 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de" (UID: "f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.042892 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.043154 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.043224 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.043322 4661 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.043380 4661 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.043429 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlkz6\" (UniqueName: \"kubernetes.io/projected/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-kube-api-access-mlkz6\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.043479 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.043526 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.043573 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt65f\" (UniqueName: \"kubernetes.io/projected/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de-kube-api-access-wt65f\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.437919 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8t5pf" 
event={"ID":"dbaa46dc-18ed-41e6-84b6-86daf834ffd4","Type":"ContainerDied","Data":"206e11d3f8f9609661028aeb06e82c6978b287bcb24f24835209aa18ca31b328"} Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.437966 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="206e11d3f8f9609661028aeb06e82c6978b287bcb24f24835209aa18ca31b328" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.437967 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8t5pf" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.441248 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6mvml" event={"ID":"f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de","Type":"ContainerDied","Data":"372774cdaeaed41b381e7451a6fa33d00d559751b774a7a0e59e7b6725fca334"} Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.441304 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="372774cdaeaed41b381e7451a6fa33d00d559751b774a7a0e59e7b6725fca334" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.441405 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6mvml" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.447506 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4","Type":"ContainerStarted","Data":"a62e7e6c87dd6649efb329db945ddce05752de194660ef6f7246bf8205684dcb"} Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.454156 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bbdtt" event={"ID":"5443a645-bf2b-48db-8111-efa81b46526c","Type":"ContainerDied","Data":"91b904b0f546bec369c24189a2e76caad6cbdc7109e76272f202ab195232c7af"} Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.454235 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b904b0f546bec369c24189a2e76caad6cbdc7109e76272f202ab195232c7af" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.454410 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bbdtt" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.486863 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-68lzj" event={"ID":"b097fc1c-9aca-44d7-be7e-cd35bf67f9f5","Type":"ContainerDied","Data":"6fb9951cee545bb785b9462abbc4d79b8f0cc6272606175698594f3a2ebe0485"} Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.486929 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fb9951cee545bb785b9462abbc4d79b8f0cc6272606175698594f3a2ebe0485" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.487056 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-68lzj" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.592576 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c468c8b55-f2kw4"] Jan 20 18:25:20 crc kubenswrapper[4661]: E0120 18:25:20.592978 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5443a645-bf2b-48db-8111-efa81b46526c" containerName="barbican-db-sync" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.592994 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="5443a645-bf2b-48db-8111-efa81b46526c" containerName="barbican-db-sync" Jan 20 18:25:20 crc kubenswrapper[4661]: E0120 18:25:20.593023 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b097fc1c-9aca-44d7-be7e-cd35bf67f9f5" containerName="keystone-bootstrap" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.593032 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b097fc1c-9aca-44d7-be7e-cd35bf67f9f5" containerName="keystone-bootstrap" Jan 20 18:25:20 crc kubenswrapper[4661]: E0120 18:25:20.593044 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7ce2d6-6f59-4934-b54c-d1d763e14c22" containerName="dnsmasq-dns" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.593052 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7ce2d6-6f59-4934-b54c-d1d763e14c22" containerName="dnsmasq-dns" Jan 20 18:25:20 crc kubenswrapper[4661]: E0120 18:25:20.593066 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbaa46dc-18ed-41e6-84b6-86daf834ffd4" containerName="placement-db-sync" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.593073 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbaa46dc-18ed-41e6-84b6-86daf834ffd4" containerName="placement-db-sync" Jan 20 18:25:20 crc kubenswrapper[4661]: E0120 18:25:20.593087 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7ce2d6-6f59-4934-b54c-d1d763e14c22" containerName="init" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.593095 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7ce2d6-6f59-4934-b54c-d1d763e14c22" containerName="init" Jan 20 18:25:20 crc kubenswrapper[4661]: E0120 18:25:20.593109 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de" containerName="neutron-db-sync" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.593117 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de" containerName="neutron-db-sync" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.593290 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad7ce2d6-6f59-4934-b54c-d1d763e14c22" containerName="dnsmasq-dns" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.593304 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbaa46dc-18ed-41e6-84b6-86daf834ffd4" containerName="placement-db-sync" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.593315 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de" containerName="neutron-db-sync" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.593328 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b097fc1c-9aca-44d7-be7e-cd35bf67f9f5" containerName="keystone-bootstrap" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.593349 4661 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5443a645-bf2b-48db-8111-efa81b46526c" containerName="barbican-db-sync" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.600749 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.603277 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.603569 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.603698 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.609217 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bvbmp" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.609261 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.609442 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.616228 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c468c8b55-f2kw4"] Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.653110 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-credential-keys\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.653159 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-internal-tls-certs\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.653180 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-fernet-keys\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.653224 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-scripts\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.653239 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-public-tls-certs\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.653264 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-combined-ca-bundle\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.653283 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbhf8\" (UniqueName: \"kubernetes.io/projected/40923d71-e4f3-4c19-939c-e9f9b12fe635-kube-api-access-gbhf8\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.653308 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-config-data\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.753947 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-66f64dd556-cvpcx"] Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.754655 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-credential-keys\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.754720 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-internal-tls-certs\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.754749 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-fernet-keys\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.754808 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-scripts\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.754828 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-public-tls-certs\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.754854 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-combined-ca-bundle\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.754876 4661 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbhf8\" (UniqueName: \"kubernetes.io/projected/40923d71-e4f3-4c19-939c-e9f9b12fe635-kube-api-access-gbhf8\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.754903 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-config-data\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.758178 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.761022 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-scripts\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.761127 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-public-tls-certs\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.761534 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-config-data\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.765023 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-combined-ca-bundle\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.767577 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-fernet-keys\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.770040 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-credential-keys\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.770991 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40923d71-e4f3-4c19-939c-e9f9b12fe635-internal-tls-certs\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.804039 4661 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.804427 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.804529 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.804635 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vj6nj" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.804747 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.822007 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbhf8\" (UniqueName: \"kubernetes.io/projected/40923d71-e4f3-4c19-939c-e9f9b12fe635-kube-api-access-gbhf8\") pod \"keystone-c468c8b55-f2kw4\" (UID: \"40923d71-e4f3-4c19-939c-e9f9b12fe635\") " pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.844319 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66f64dd556-cvpcx"] Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.891711 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-w6r5c"] Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.905385 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.914095 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.966135 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-combined-ca-bundle\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.966180 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-internal-tls-certs\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.966208 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-scripts\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.966234 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmgg6\" (UniqueName: \"kubernetes.io/projected/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-kube-api-access-hmgg6\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.966252 4661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-logs\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.966280 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-public-tls-certs\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.966330 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-config-data\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:20 crc kubenswrapper[4661]: I0120 18:25:20.985326 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-w6r5c"] Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.058845 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-689bd5f764-p5qpx"] Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.061867 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.067157 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.067514 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.067561 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.067861 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5b484f76ff-qrd8w"] Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.067800 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6dlvz" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.069152 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmgg6\" (UniqueName: \"kubernetes.io/projected/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-kube-api-access-hmgg6\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.069316 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-logs\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.069367 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-public-tls-certs\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " 
pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.069414 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5rmn\" (UniqueName: \"kubernetes.io/projected/260cab02-a26b-4891-9c82-54e4e3746911-kube-api-access-c5rmn\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.069444 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-config\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.069464 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-config-data\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.069494 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.069521 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-dns-svc\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.069542 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-combined-ca-bundle\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.069564 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.069602 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-internal-tls-certs\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.069636 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-scripts\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " 
pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.069917 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-logs\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.073091 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.084549 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-combined-ca-bundle\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.097899 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b484f76ff-qrd8w"] Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.102411 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-config-data\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.103159 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-internal-tls-certs\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.105705 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jrm7t" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.106003 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.106241 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.107293 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-public-tls-certs\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.107557 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-scripts\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.129550 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-689bd5f764-p5qpx"] Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.133407 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-596b75897b-2g4gm"] Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.134645 4661 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.142711 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.148307 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmgg6\" (UniqueName: \"kubernetes.io/projected/371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61-kube-api-access-hmgg6\") pod \"placement-66f64dd556-cvpcx\" (UID: \"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61\") " pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.154738 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-596b75897b-2g4gm"] Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.171419 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmd2z\" (UniqueName: \"kubernetes.io/projected/051bbaec-94d3-4a08-ab1b-4417e566e5f3-kube-api-access-nmd2z\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.171455 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-combined-ca-bundle\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.171501 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5rmn\" (UniqueName: \"kubernetes.io/projected/260cab02-a26b-4891-9c82-54e4e3746911-kube-api-access-c5rmn\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.171528 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-config\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.171551 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-config\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.171576 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.171596 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-ovndb-tls-certs\") pod 
\"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.171616 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-dns-svc\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.171632 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.171670 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-httpd-config\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.172648 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-config\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.173147 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-dns-svc\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.173624 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.187232 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.222950 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.239550 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5rmn\" (UniqueName: \"kubernetes.io/projected/260cab02-a26b-4891-9c82-54e4e3746911-kube-api-access-c5rmn\") pod \"dnsmasq-dns-7b946d459c-w6r5c\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276453 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-httpd-config\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276496 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341d9328-73af-4986-9901-43b929a9e030-logs\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276541 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmd2z\" (UniqueName: \"kubernetes.io/projected/051bbaec-94d3-4a08-ab1b-4417e566e5f3-kube-api-access-nmd2z\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276561 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-combined-ca-bundle\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276596 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zcnj\" (UniqueName: \"kubernetes.io/projected/341d9328-73af-4986-9901-43b929a9e030-kube-api-access-6zcnj\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276615 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-config-data\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276637 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfstq\" (UniqueName: \"kubernetes.io/projected/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-kube-api-access-zfstq\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276688 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/341d9328-73af-4986-9901-43b929a9e030-config-data-custom\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276706 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/341d9328-73af-4986-9901-43b929a9e030-combined-ca-bundle\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276722 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-config\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276745 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-logs\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276770 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-ovndb-tls-certs\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276786 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341d9328-73af-4986-9901-43b929a9e030-config-data\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276813 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-combined-ca-bundle\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.276829 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-config-data-custom\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.282660 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-w6r5c"] Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.283208 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.283345 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-combined-ca-bundle\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.294403 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-config\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.297046 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-ovndb-tls-certs\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.304362 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-httpd-config\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.312425 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-5nk7f"] Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.314080 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.315371 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmd2z\" (UniqueName: \"kubernetes.io/projected/051bbaec-94d3-4a08-ab1b-4417e566e5f3-kube-api-access-nmd2z\") pod \"neutron-689bd5f764-p5qpx\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.382228 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-combined-ca-bundle\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.382276 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-config-data-custom\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.382310 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341d9328-73af-4986-9901-43b929a9e030-logs\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.382360 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zcnj\" (UniqueName: \"kubernetes.io/projected/341d9328-73af-4986-9901-43b929a9e030-kube-api-access-6zcnj\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.382378 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-config-data\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.382397 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfstq\" (UniqueName: \"kubernetes.io/projected/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-kube-api-access-zfstq\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.382432 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/341d9328-73af-4986-9901-43b929a9e030-config-data-custom\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.382450 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/341d9328-73af-4986-9901-43b929a9e030-combined-ca-bundle\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.382471 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-logs\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.382500 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341d9328-73af-4986-9901-43b929a9e030-config-data\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.387148 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341d9328-73af-4986-9901-43b929a9e030-logs\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.387493 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-5nk7f"] Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.387584 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-logs\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.392659 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-combined-ca-bundle\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.397180 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341d9328-73af-4986-9901-43b929a9e030-config-data\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.399386 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/341d9328-73af-4986-9901-43b929a9e030-config-data-custom\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.399800 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-config-data-custom\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 
crc kubenswrapper[4661]: I0120 18:25:21.400197 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/341d9328-73af-4986-9901-43b929a9e030-combined-ca-bundle\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.401324 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-config-data\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.411855 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.428387 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfstq\" (UniqueName: \"kubernetes.io/projected/f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58-kube-api-access-zfstq\") pod \"barbican-keystone-listener-596b75897b-2g4gm\" (UID: \"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58\") " pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.429102 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zcnj\" (UniqueName: \"kubernetes.io/projected/341d9328-73af-4986-9901-43b929a9e030-kube-api-access-6zcnj\") pod \"barbican-worker-5b484f76ff-qrd8w\" (UID: \"341d9328-73af-4986-9901-43b929a9e030\") " pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.464765 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-79fcf454c6-dzz6z"] Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.466182 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.470846 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5b484f76ff-qrd8w" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.474291 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.483815 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.484052 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.484093 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-dns-svc\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.484117 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-config\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.484137 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkssr\" (UniqueName: \"kubernetes.io/projected/deebca4d-9edb-45ba-90a2-c58696b6c0d7-kube-api-access-tkssr\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.494596 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79fcf454c6-dzz6z"] Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.499347 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.598169 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-dns-svc\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.598249 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-config\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.598280 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkssr\" (UniqueName: \"kubernetes.io/projected/deebca4d-9edb-45ba-90a2-c58696b6c0d7-kube-api-access-tkssr\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.598331 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-logs\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.598506 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2bhg\" (UniqueName: \"kubernetes.io/projected/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-kube-api-access-z2bhg\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.598804 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-combined-ca-bundle\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.598898 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.598975 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-config-data-custom\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.599026 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.599049 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-config-data\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.600159 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-dns-svc\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.601289 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-config\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.602201 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.602741 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.699949 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-logs\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.700240 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2bhg\" (UniqueName: \"kubernetes.io/projected/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-kube-api-access-z2bhg\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.700275 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-combined-ca-bundle\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.700438 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-config-data-custom\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " 
pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.700470 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-config-data\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.702078 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-logs\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.710891 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkssr\" (UniqueName: \"kubernetes.io/projected/deebca4d-9edb-45ba-90a2-c58696b6c0d7-kube-api-access-tkssr\") pod \"dnsmasq-dns-6bb684768f-5nk7f\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.751272 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2bhg\" (UniqueName: \"kubernetes.io/projected/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-kube-api-access-z2bhg\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.759445 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c468c8b55-f2kw4"] Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.765350 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-config-data-custom\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.766341 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-config-data\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.769143 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-combined-ca-bundle\") pod \"barbican-api-79fcf454c6-dzz6z\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.878258 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:21 crc kubenswrapper[4661]: I0120 18:25:21.969928 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.171468 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-689bd5f764-p5qpx"] Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.176755 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66f64dd556-cvpcx"] Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.201905 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-w6r5c"] Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.466710 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b484f76ff-qrd8w"] Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.511801 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-596b75897b-2g4gm"] Jan 20 18:25:22 crc kubenswrapper[4661]: W0120 18:25:22.576829 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf94c4b9e_2f53_44d0_a637_4e8f4a3f9d58.slice/crio-075adc5c8d7a8e79a11118d453036911dadbbb16f7e7e62e755a040c726d4836 WatchSource:0}: Error finding container 075adc5c8d7a8e79a11118d453036911dadbbb16f7e7e62e755a040c726d4836: Status 404 returned error can't find the container with id 075adc5c8d7a8e79a11118d453036911dadbbb16f7e7e62e755a040c726d4836 Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.578722 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c468c8b55-f2kw4" event={"ID":"40923d71-e4f3-4c19-939c-e9f9b12fe635","Type":"ContainerStarted","Data":"8c28ceaf62e62902ae467dabc55a881bf55c0f186e250adc52fd8faaf21bd1bb"} Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.598204 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.598229 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c468c8b55-f2kw4" event={"ID":"40923d71-e4f3-4c19-939c-e9f9b12fe635","Type":"ContainerStarted","Data":"815c97b4b5931d351eee67ebc046156de97889f986e78ce000410d58381798fa"} Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.598253 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" event={"ID":"260cab02-a26b-4891-9c82-54e4e3746911","Type":"ContainerStarted","Data":"55d15eb313496b0d189305106a0b53e2bab69e79550a288a5684e3ff76e44807"} Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.622486 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-689bd5f764-p5qpx" event={"ID":"051bbaec-94d3-4a08-ab1b-4417e566e5f3","Type":"ContainerStarted","Data":"971f04cb600621070bde5d5dd44cb0d4b7b2a2b05eb782bf691609c8c447fae4"} Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.623318 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c468c8b55-f2kw4" podStartSLOduration=2.623295585 podStartE2EDuration="2.623295585s" podCreationTimestamp="2026-01-20 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:25:22.608218249 +0000 UTC m=+1178.939007911" watchObservedRunningTime="2026-01-20 18:25:22.623295585 +0000 UTC m=+1178.954085247" Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.627831 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-66f64dd556-cvpcx" event={"ID":"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61","Type":"ContainerStarted","Data":"f13770423597127e4cc2e7023ca3d9b5f7088f2294f0d07f518cda81f984c4ad"} Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.678969 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79fcf454c6-dzz6z"] Jan 20 18:25:22 crc kubenswrapper[4661]: I0120 18:25:22.754288 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-5nk7f"] Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.401901 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-57dd7457c5-2txjn"] Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.403385 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.405239 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.405421 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.431273 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57dd7457c5-2txjn"] Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.473389 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-public-tls-certs\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.473488 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-combined-ca-bundle\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.473536 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-config\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.473575 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-internal-tls-certs\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.473598 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m66jl\" (UniqueName: \"kubernetes.io/projected/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-kube-api-access-m66jl\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.473722 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-ovndb-tls-certs\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.473762 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-httpd-config\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.574903 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-httpd-config\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.575247 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-public-tls-certs\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.575287 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-combined-ca-bundle\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.575327 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-config\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.575366 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m66jl\" (UniqueName: \"kubernetes.io/projected/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-kube-api-access-m66jl\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.575389 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-internal-tls-certs\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.575489 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-ovndb-tls-certs\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.582820 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-ovndb-tls-certs\") pod \"neutron-57dd7457c5-2txjn\" 
(UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.586017 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-config\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.587095 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-httpd-config\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.587479 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-combined-ca-bundle\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.588440 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-public-tls-certs\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.593440 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-internal-tls-certs\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.606402 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m66jl\" (UniqueName: \"kubernetes.io/projected/978fc50f-3ea8-4427-af11-d8f4c4f3c0d5-kube-api-access-m66jl\") pod \"neutron-57dd7457c5-2txjn\" (UID: \"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5\") " pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.637650 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66f64dd556-cvpcx" event={"ID":"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61","Type":"ContainerStarted","Data":"89f2bc1e898ac27eca2793adfc02542d9c5f424695e862e54c5166c4cbe81710"} Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.637719 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66f64dd556-cvpcx" event={"ID":"371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61","Type":"ContainerStarted","Data":"b9a873f37c4c09f270c6fbaf8e436566e24c6358e3cb42577c5f7f5e47434d92"} Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.638850 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.638888 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.641287 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b484f76ff-qrd8w" 
event={"ID":"341d9328-73af-4986-9901-43b929a9e030","Type":"ContainerStarted","Data":"f84ac86108df58ad07e64f219f1b5303627656f137527169a57354165ea2597a"} Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.651635 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" event={"ID":"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58","Type":"ContainerStarted","Data":"075adc5c8d7a8e79a11118d453036911dadbbb16f7e7e62e755a040c726d4836"} Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.653551 4661 generic.go:334] "Generic (PLEG): container finished" podID="deebca4d-9edb-45ba-90a2-c58696b6c0d7" containerID="98fb9616f7b5f78e1a53f936d47574a9188899d576bed25acbe1f83c83288f5f" exitCode=0 Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.653601 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" event={"ID":"deebca4d-9edb-45ba-90a2-c58696b6c0d7","Type":"ContainerDied","Data":"98fb9616f7b5f78e1a53f936d47574a9188899d576bed25acbe1f83c83288f5f"} Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.653622 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" event={"ID":"deebca4d-9edb-45ba-90a2-c58696b6c0d7","Type":"ContainerStarted","Data":"d6a06ba0686004c7ba48069cda6b43f6a92aaf4082dcf2f4933fb1f960e5d7dc"} Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.676147 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-66f64dd556-cvpcx" podStartSLOduration=3.6761294810000003 podStartE2EDuration="3.676129481s" podCreationTimestamp="2026-01-20 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:25:23.67496618 +0000 UTC m=+1180.005755842" watchObservedRunningTime="2026-01-20 18:25:23.676129481 +0000 UTC m=+1180.006919143" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.703701 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fcf454c6-dzz6z" event={"ID":"52dc5147-ce19-4dcc-94b5-a2eaacfba32d","Type":"ContainerStarted","Data":"62dc42b97d632f736c68a5edeb599f75a56ef8ad5ff6b2d89515f96206e9419a"} Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.703748 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fcf454c6-dzz6z" event={"ID":"52dc5147-ce19-4dcc-94b5-a2eaacfba32d","Type":"ContainerStarted","Data":"1c28619d8fadc633e6dc9980642d7bffe0eecd909790908217e5b278b7484f67"} Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.703761 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fcf454c6-dzz6z" event={"ID":"52dc5147-ce19-4dcc-94b5-a2eaacfba32d","Type":"ContainerStarted","Data":"ab44fe2dfbf32408849cd233d3f532ff19e92cb93699af2f116d68e7c835b18e"} Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.707920 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.708078 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.731768 4661 generic.go:334] "Generic (PLEG): container finished" podID="260cab02-a26b-4891-9c82-54e4e3746911" containerID="03c0382655b62de674636c072b62b82379b3a111f790d8d6e62e802493222587" exitCode=0 Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 
18:25:23.731873 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" event={"ID":"260cab02-a26b-4891-9c82-54e4e3746911","Type":"ContainerDied","Data":"03c0382655b62de674636c072b62b82379b3a111f790d8d6e62e802493222587"} Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.757823 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-689bd5f764-p5qpx" event={"ID":"051bbaec-94d3-4a08-ab1b-4417e566e5f3","Type":"ContainerStarted","Data":"8956ce0726710ad0120621168680b90297d4cdda4657606c96b00e928727d835"} Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.757861 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-689bd5f764-p5qpx" event={"ID":"051bbaec-94d3-4a08-ab1b-4417e566e5f3","Type":"ContainerStarted","Data":"efa988c462b7d5828080f9b8427590917c2e9b2ce41f95a8ba5e62b8472613a8"} Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.758884 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.759114 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.848950 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-79fcf454c6-dzz6z" podStartSLOduration=2.848925388 podStartE2EDuration="2.848925388s" podCreationTimestamp="2026-01-20 18:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:25:23.75172285 +0000 UTC m=+1180.082512532" watchObservedRunningTime="2026-01-20 18:25:23.848925388 +0000 UTC m=+1180.179715050" Jan 20 18:25:23 crc kubenswrapper[4661]: I0120 18:25:23.895950 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-689bd5f764-p5qpx" podStartSLOduration=3.895929684 podStartE2EDuration="3.895929684s" podCreationTimestamp="2026-01-20 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:25:23.808404831 +0000 UTC m=+1180.139194493" watchObservedRunningTime="2026-01-20 18:25:23.895929684 +0000 UTC m=+1180.226719346" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.325994 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.408282 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-ovsdbserver-nb\") pod \"260cab02-a26b-4891-9c82-54e4e3746911\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.408344 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5rmn\" (UniqueName: \"kubernetes.io/projected/260cab02-a26b-4891-9c82-54e4e3746911-kube-api-access-c5rmn\") pod \"260cab02-a26b-4891-9c82-54e4e3746911\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.408371 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-dns-svc\") pod \"260cab02-a26b-4891-9c82-54e4e3746911\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.408451 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-config\") pod \"260cab02-a26b-4891-9c82-54e4e3746911\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.408557 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-ovsdbserver-sb\") pod \"260cab02-a26b-4891-9c82-54e4e3746911\" (UID: \"260cab02-a26b-4891-9c82-54e4e3746911\") " Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.413350 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/260cab02-a26b-4891-9c82-54e4e3746911-kube-api-access-c5rmn" (OuterVolumeSpecName: "kube-api-access-c5rmn") pod "260cab02-a26b-4891-9c82-54e4e3746911" (UID: "260cab02-a26b-4891-9c82-54e4e3746911"). InnerVolumeSpecName "kube-api-access-c5rmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.439801 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "260cab02-a26b-4891-9c82-54e4e3746911" (UID: "260cab02-a26b-4891-9c82-54e4e3746911"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.440729 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "260cab02-a26b-4891-9c82-54e4e3746911" (UID: "260cab02-a26b-4891-9c82-54e4e3746911"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.440970 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "260cab02-a26b-4891-9c82-54e4e3746911" (UID: "260cab02-a26b-4891-9c82-54e4e3746911"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.455208 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-config" (OuterVolumeSpecName: "config") pod "260cab02-a26b-4891-9c82-54e4e3746911" (UID: "260cab02-a26b-4891-9c82-54e4e3746911"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.510882 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.510920 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5rmn\" (UniqueName: \"kubernetes.io/projected/260cab02-a26b-4891-9c82-54e4e3746911-kube-api-access-c5rmn\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.510936 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.510950 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.510968 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/260cab02-a26b-4891-9c82-54e4e3746911-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.607369 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57dd7457c5-2txjn"] Jan 20 18:25:24 crc kubenswrapper[4661]: W0120 18:25:24.609246 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod978fc50f_3ea8_4427_af11_d8f4c4f3c0d5.slice/crio-bad76193c51651c84e319ce32d8f73f3c37f0a4272ef6c0024b9b2c6a4787357 WatchSource:0}: Error finding container bad76193c51651c84e319ce32d8f73f3c37f0a4272ef6c0024b9b2c6a4787357: Status 404 returned error can't find the container with id bad76193c51651c84e319ce32d8f73f3c37f0a4272ef6c0024b9b2c6a4787357 Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.773567 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57dd7457c5-2txjn" event={"ID":"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5","Type":"ContainerStarted","Data":"bad76193c51651c84e319ce32d8f73f3c37f0a4272ef6c0024b9b2c6a4787357"} Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.778599 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.779725 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-w6r5c" event={"ID":"260cab02-a26b-4891-9c82-54e4e3746911","Type":"ContainerDied","Data":"55d15eb313496b0d189305106a0b53e2bab69e79550a288a5684e3ff76e44807"} Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.779776 4661 scope.go:117] "RemoveContainer" containerID="03c0382655b62de674636c072b62b82379b3a111f790d8d6e62e802493222587" Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.863303 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-w6r5c"] Jan 20 18:25:24 crc kubenswrapper[4661]: I0120 18:25:24.889462 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-w6r5c"] Jan 20 18:25:26 crc kubenswrapper[4661]: I0120 18:25:26.150605 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="260cab02-a26b-4891-9c82-54e4e3746911" path="/var/lib/kubelet/pods/260cab02-a26b-4891-9c82-54e4e3746911/volumes" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.439413 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b8c8444c8-77p78"] Jan 20 18:25:27 crc kubenswrapper[4661]: E0120 18:25:27.440016 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260cab02-a26b-4891-9c82-54e4e3746911" containerName="init" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.440027 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="260cab02-a26b-4891-9c82-54e4e3746911" containerName="init" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.440181 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="260cab02-a26b-4891-9c82-54e4e3746911" containerName="init" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.441243 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.443546 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.445524 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.458586 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b8c8444c8-77p78"] Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.488013 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-combined-ca-bundle\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.488057 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e636b383-8c8d-4554-9717-35ba37b726f5-logs\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.488093 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6jw2\" (UniqueName: \"kubernetes.io/projected/e636b383-8c8d-4554-9717-35ba37b726f5-kube-api-access-c6jw2\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.488146 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-config-data-custom\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.488190 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-public-tls-certs\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.488208 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-internal-tls-certs\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.488226 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-config-data\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.589657 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-combined-ca-bundle\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.589739 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e636b383-8c8d-4554-9717-35ba37b726f5-logs\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.589790 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6jw2\" (UniqueName: \"kubernetes.io/projected/e636b383-8c8d-4554-9717-35ba37b726f5-kube-api-access-c6jw2\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.589854 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-config-data-custom\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.589907 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-public-tls-certs\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.589928 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-internal-tls-certs\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.589951 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-config-data\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.590489 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e636b383-8c8d-4554-9717-35ba37b726f5-logs\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.598433 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-config-data-custom\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.598503 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-combined-ca-bundle\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.601496 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-internal-tls-certs\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.601878 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-public-tls-certs\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.604606 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e636b383-8c8d-4554-9717-35ba37b726f5-config-data\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.613639 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6jw2\" (UniqueName: \"kubernetes.io/projected/e636b383-8c8d-4554-9717-35ba37b726f5-kube-api-access-c6jw2\") pod \"barbican-api-7b8c8444c8-77p78\" (UID: \"e636b383-8c8d-4554-9717-35ba37b726f5\") " pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:27 crc kubenswrapper[4661]: I0120 18:25:27.763077 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:29 crc kubenswrapper[4661]: I0120 18:25:29.323748 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:25:29 crc kubenswrapper[4661]: I0120 18:25:29.324192 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:25:29 crc kubenswrapper[4661]: I0120 18:25:29.324256 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:25:29 crc kubenswrapper[4661]: I0120 18:25:29.325406 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a7b06eb16aab1344c1779c2757f290ec217a65e34e3c4694e2964d4e3f3d079"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 18:25:29 crc kubenswrapper[4661]: I0120 18:25:29.325543 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://6a7b06eb16aab1344c1779c2757f290ec217a65e34e3c4694e2964d4e3f3d079" gracePeriod=600 Jan 20 18:25:29 crc kubenswrapper[4661]: I0120 18:25:29.822351 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="6a7b06eb16aab1344c1779c2757f290ec217a65e34e3c4694e2964d4e3f3d079" exitCode=0 Jan 20 18:25:29 crc kubenswrapper[4661]: I0120 18:25:29.822429 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"6a7b06eb16aab1344c1779c2757f290ec217a65e34e3c4694e2964d4e3f3d079"} Jan 20 18:25:29 crc kubenswrapper[4661]: I0120 18:25:29.824291 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" event={"ID":"deebca4d-9edb-45ba-90a2-c58696b6c0d7","Type":"ContainerStarted","Data":"6e736c620712a32ceddafe9766bb3f4402b36e9925c35abadad5dd11afbf1e7b"} Jan 20 18:25:29 crc kubenswrapper[4661]: I0120 18:25:29.824544 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:29 crc kubenswrapper[4661]: I0120 18:25:29.851257 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" podStartSLOduration=8.851231285 podStartE2EDuration="8.851231285s" podCreationTimestamp="2026-01-20 18:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:25:29.844162839 +0000 UTC m=+1186.174952531" watchObservedRunningTime="2026-01-20 18:25:29.851231285 +0000 UTC m=+1186.182020957" Jan 20 18:25:30 crc kubenswrapper[4661]: I0120 
18:25:30.836133 4661 scope.go:117] "RemoveContainer" containerID="728daaf1b473865f17a594f3c69374509eda708725908283281a9d0d4f532f9a" Jan 20 18:25:31 crc kubenswrapper[4661]: I0120 18:25:31.855221 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57dd7457c5-2txjn" event={"ID":"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5","Type":"ContainerStarted","Data":"9a192ad7d0f5be4a05203be0f213dced141fd5f600e2392cf27a17989eed2051"} Jan 20 18:25:33 crc kubenswrapper[4661]: I0120 18:25:33.414976 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:33 crc kubenswrapper[4661]: I0120 18:25:33.448734 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:35 crc kubenswrapper[4661]: E0120 18:25:35.348624 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified" Jan 20 18:25:35 crc kubenswrapper[4661]: E0120 18:25:35.349203 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-worker-log,Image:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,Command:[/usr/bin/dumb-init],Args:[--single-child -- /usr/bin/tail -n+1 -F /var/log/barbican/barbican-worker.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n97h94h686h75h75h67chb8h675h594h55fh87h57bh556hfdh8bhc6h5fhc8h65bh589h66fh66ch5bh586h686h55h5c9h8ch5h658hcfhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/barbican,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zcnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-worker-5b484f76ff-qrd8w_openstack(341d9328-73af-4986-9901-43b929a9e030): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 18:25:35 crc kubenswrapper[4661]: E0120 18:25:35.351931 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-worker-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"barbican-worker\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified\\\"\"]" pod="openstack/barbican-worker-5b484f76ff-qrd8w" 
podUID="341d9328-73af-4986-9901-43b929a9e030" Jan 20 18:25:35 crc kubenswrapper[4661]: E0120 18:25:35.899384 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-worker-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified\\\"\", failed to \"StartContainer\" for \"barbican-worker\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified\\\"\"]" pod="openstack/barbican-worker-5b484f76ff-qrd8w" podUID="341d9328-73af-4986-9901-43b929a9e030" Jan 20 18:25:36 crc kubenswrapper[4661]: I0120 18:25:36.971892 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.067277 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-ntqbq"] Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.067528 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" podUID="ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" containerName="dnsmasq-dns" containerID="cri-o://cd107171512ab4dabc070b9cd54cf73aff0688d5411db067faaf56574fdd38b9" gracePeriod=10 Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.131249 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b8c8444c8-77p78"] Jan 20 18:25:37 crc kubenswrapper[4661]: E0120 18:25:37.344979 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.359242 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" podUID="ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.772820 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.801265 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-ovsdbserver-nb\") pod \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.801358 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-dns-svc\") pod \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.801554 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcnqk\" (UniqueName: \"kubernetes.io/projected/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-kube-api-access-bcnqk\") pod \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.801621 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-ovsdbserver-sb\") pod \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.801658 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-config\") pod \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\" (UID: \"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be\") " Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.831935 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-kube-api-access-bcnqk" (OuterVolumeSpecName: "kube-api-access-bcnqk") pod "ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" (UID: "ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be"). InnerVolumeSpecName "kube-api-access-bcnqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.905155 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcnqk\" (UniqueName: \"kubernetes.io/projected/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-kube-api-access-bcnqk\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.943918 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"f275e114f07b1c3b029b2e359e7da6e2181e513e10ba8fa3419553c67d8e09a7"} Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.946131 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-config" (OuterVolumeSpecName: "config") pod "ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" (UID: "ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.950841 4661 generic.go:334] "Generic (PLEG): container finished" podID="ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" containerID="cd107171512ab4dabc070b9cd54cf73aff0688d5411db067faaf56574fdd38b9" exitCode=0 Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.950927 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.950895 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" event={"ID":"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be","Type":"ContainerDied","Data":"cd107171512ab4dabc070b9cd54cf73aff0688d5411db067faaf56574fdd38b9"} Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.951724 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-ntqbq" event={"ID":"ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be","Type":"ContainerDied","Data":"fbd602182f4aa96279a49ee95ca5033da750d8f396e05372eff4f8886a3f6fd5"} Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.951747 4661 scope.go:117] "RemoveContainer" containerID="cd107171512ab4dabc070b9cd54cf73aff0688d5411db067faaf56574fdd38b9" Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.956013 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57dd7457c5-2txjn" event={"ID":"978fc50f-3ea8-4427-af11-d8f4c4f3c0d5","Type":"ContainerStarted","Data":"3c73e994de10495eabc1a9b7542f58d92940554907f42177a124390c685b9c3f"} Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.957187 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.965062 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" (UID: "ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.987368 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b8c8444c8-77p78" event={"ID":"e636b383-8c8d-4554-9717-35ba37b726f5","Type":"ContainerStarted","Data":"c4c516017db4201c3ec06a074f9701dacf7b98bea0660cb55351195f0e8632d8"} Jan 20 18:25:37 crc kubenswrapper[4661]: I0120 18:25:37.987430 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b8c8444c8-77p78" event={"ID":"e636b383-8c8d-4554-9717-35ba37b726f5","Type":"ContainerStarted","Data":"fd6199ac483fbabf89f78678f80a4597636ec35f466a09f685c611a519db34b0"} Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.006692 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" (UID: "ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.007989 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.008010 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.008020 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.009186 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4","Type":"ContainerStarted","Data":"373a0088aa2a89b582f21687f973c66e718eb537e2ed648d42024e1574c16fda"} Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.009242 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-57dd7457c5-2txjn" podStartSLOduration=15.00922476 podStartE2EDuration="15.00922476s" podCreationTimestamp="2026-01-20 18:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:25:37.998374435 +0000 UTC m=+1194.329164117" watchObservedRunningTime="2026-01-20 18:25:38.00922476 +0000 UTC m=+1194.340014422" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.009710 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerName="ceilometer-notification-agent" containerID="cri-o://e40b4ea97366e02aeef3a875ca4e5621745d2a173a54638487639386aeb40bda" gracePeriod=30 Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.009921 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.010171 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerName="proxy-httpd" containerID="cri-o://373a0088aa2a89b582f21687f973c66e718eb537e2ed648d42024e1574c16fda" gracePeriod=30 Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.010257 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerName="sg-core" containerID="cri-o://a62e7e6c87dd6649efb329db945ddce05752de194660ef6f7246bf8205684dcb" gracePeriod=30 Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.013131 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" (UID: "ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.026927 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" event={"ID":"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58","Type":"ContainerStarted","Data":"0cf7520d8625ea6b77cd942d7df8267f59e9f5b0ed62b80d886bba905a7aa8d7"} Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.026971 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" event={"ID":"f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58","Type":"ContainerStarted","Data":"9fded650677c247e551297714c8d8d9c88c549b3f29ae66542ddead30412d4b5"} Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.049280 4661 scope.go:117] "RemoveContainer" containerID="6096e0cd0b5f587a882bb0f8c6d764ac949ff8f6bf7be3165a1450ce199187f1" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.064548 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-596b75897b-2g4gm" podStartSLOduration=3.029476769 podStartE2EDuration="17.064529926s" podCreationTimestamp="2026-01-20 18:25:21 +0000 UTC" firstStartedPulling="2026-01-20 18:25:22.599867259 +0000 UTC m=+1178.930656921" lastFinishedPulling="2026-01-20 18:25:36.634920416 +0000 UTC m=+1192.965710078" observedRunningTime="2026-01-20 18:25:38.056390291 +0000 UTC m=+1194.387179963" watchObservedRunningTime="2026-01-20 18:25:38.064529926 +0000 UTC m=+1194.395319588" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.102238 4661 scope.go:117] "RemoveContainer" containerID="cd107171512ab4dabc070b9cd54cf73aff0688d5411db067faaf56574fdd38b9" Jan 20 18:25:38 crc kubenswrapper[4661]: E0120 18:25:38.104079 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd107171512ab4dabc070b9cd54cf73aff0688d5411db067faaf56574fdd38b9\": container with ID starting with cd107171512ab4dabc070b9cd54cf73aff0688d5411db067faaf56574fdd38b9 not found: ID does not exist" containerID="cd107171512ab4dabc070b9cd54cf73aff0688d5411db067faaf56574fdd38b9" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.104129 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd107171512ab4dabc070b9cd54cf73aff0688d5411db067faaf56574fdd38b9"} err="failed to get container status \"cd107171512ab4dabc070b9cd54cf73aff0688d5411db067faaf56574fdd38b9\": rpc error: code = NotFound desc = could not find container \"cd107171512ab4dabc070b9cd54cf73aff0688d5411db067faaf56574fdd38b9\": container with ID starting with cd107171512ab4dabc070b9cd54cf73aff0688d5411db067faaf56574fdd38b9 not found: ID does not exist" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.104157 4661 scope.go:117] "RemoveContainer" containerID="6096e0cd0b5f587a882bb0f8c6d764ac949ff8f6bf7be3165a1450ce199187f1" Jan 20 18:25:38 crc kubenswrapper[4661]: E0120 18:25:38.106594 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6096e0cd0b5f587a882bb0f8c6d764ac949ff8f6bf7be3165a1450ce199187f1\": container with ID starting with 6096e0cd0b5f587a882bb0f8c6d764ac949ff8f6bf7be3165a1450ce199187f1 not found: ID does not exist" containerID="6096e0cd0b5f587a882bb0f8c6d764ac949ff8f6bf7be3165a1450ce199187f1" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.106636 4661 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6096e0cd0b5f587a882bb0f8c6d764ac949ff8f6bf7be3165a1450ce199187f1"} err="failed to get container status \"6096e0cd0b5f587a882bb0f8c6d764ac949ff8f6bf7be3165a1450ce199187f1\": rpc error: code = NotFound desc = could not find container \"6096e0cd0b5f587a882bb0f8c6d764ac949ff8f6bf7be3165a1450ce199187f1\": container with ID starting with 6096e0cd0b5f587a882bb0f8c6d764ac949ff8f6bf7be3165a1450ce199187f1 not found: ID does not exist" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.109322 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.272727 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-ntqbq"] Jan 20 18:25:38 crc kubenswrapper[4661]: I0120 18:25:38.278213 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-ntqbq"] Jan 20 18:25:39 crc kubenswrapper[4661]: I0120 18:25:39.040201 4661 generic.go:334] "Generic (PLEG): container finished" podID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerID="373a0088aa2a89b582f21687f973c66e718eb537e2ed648d42024e1574c16fda" exitCode=0 Jan 20 18:25:39 crc kubenswrapper[4661]: I0120 18:25:39.040470 4661 generic.go:334] "Generic (PLEG): container finished" podID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerID="a62e7e6c87dd6649efb329db945ddce05752de194660ef6f7246bf8205684dcb" exitCode=2 Jan 20 18:25:39 crc kubenswrapper[4661]: I0120 18:25:39.040532 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4","Type":"ContainerDied","Data":"373a0088aa2a89b582f21687f973c66e718eb537e2ed648d42024e1574c16fda"} Jan 20 18:25:39 crc kubenswrapper[4661]: I0120 18:25:39.040565 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4","Type":"ContainerDied","Data":"a62e7e6c87dd6649efb329db945ddce05752de194660ef6f7246bf8205684dcb"} Jan 20 18:25:39 crc kubenswrapper[4661]: I0120 18:25:39.042095 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-glwqq" event={"ID":"2423d758-4514-439d-a804-42287945bedc","Type":"ContainerStarted","Data":"bfb8fe059c190ece7c98766ac159b4761b111ba1e77524918c001bf90963fc6f"} Jan 20 18:25:39 crc kubenswrapper[4661]: I0120 18:25:39.059360 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b8c8444c8-77p78" event={"ID":"e636b383-8c8d-4554-9717-35ba37b726f5","Type":"ContainerStarted","Data":"0ec5635d435543a800cede256b44b9407c73639f40411cf3822750b331df47a3"} Jan 20 18:25:39 crc kubenswrapper[4661]: I0120 18:25:39.059411 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:39 crc kubenswrapper[4661]: I0120 18:25:39.060556 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:39 crc kubenswrapper[4661]: I0120 18:25:39.074546 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-glwqq" podStartSLOduration=4.004866089 podStartE2EDuration="48.074523664s" podCreationTimestamp="2026-01-20 18:24:51 +0000 UTC" firstStartedPulling="2026-01-20 18:24:52.682642089 +0000 UTC m=+1149.013431751" lastFinishedPulling="2026-01-20 18:25:36.752299664 
+0000 UTC m=+1193.083089326" observedRunningTime="2026-01-20 18:25:39.062719023 +0000 UTC m=+1195.393508705" watchObservedRunningTime="2026-01-20 18:25:39.074523664 +0000 UTC m=+1195.405313336" Jan 20 18:25:39 crc kubenswrapper[4661]: I0120 18:25:39.087565 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b8c8444c8-77p78" podStartSLOduration=12.087543246 podStartE2EDuration="12.087543246s" podCreationTimestamp="2026-01-20 18:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:25:39.083586742 +0000 UTC m=+1195.414376414" watchObservedRunningTime="2026-01-20 18:25:39.087543246 +0000 UTC m=+1195.418332908" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.078310 4661 generic.go:334] "Generic (PLEG): container finished" podID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerID="e40b4ea97366e02aeef3a875ca4e5621745d2a173a54638487639386aeb40bda" exitCode=0 Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.079577 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4","Type":"ContainerDied","Data":"e40b4ea97366e02aeef3a875ca4e5621745d2a173a54638487639386aeb40bda"} Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.165942 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" path="/var/lib/kubelet/pods/ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be/volumes" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.263398 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.448397 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-config-data\") pod \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.448507 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlm7x\" (UniqueName: \"kubernetes.io/projected/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-kube-api-access-tlm7x\") pod \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.449223 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-combined-ca-bundle\") pod \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.449261 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-sg-core-conf-yaml\") pod \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.449324 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-scripts\") pod \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " Jan 20 18:25:40 crc kubenswrapper[4661]: 
I0120 18:25:40.449639 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-run-httpd\") pod \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.450031 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" (UID: "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.450153 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-log-httpd\") pod \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\" (UID: \"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4\") " Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.450583 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" (UID: "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.451207 4661 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.451237 4661 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.467765 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-scripts" (OuterVolumeSpecName: "scripts") pod "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" (UID: "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.467777 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-kube-api-access-tlm7x" (OuterVolumeSpecName: "kube-api-access-tlm7x") pod "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" (UID: "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4"). InnerVolumeSpecName "kube-api-access-tlm7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.489060 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" (UID: "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.526934 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" (UID: "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.533020 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-config-data" (OuterVolumeSpecName: "config-data") pod "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" (UID: "5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.552931 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.552961 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlm7x\" (UniqueName: \"kubernetes.io/projected/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-kube-api-access-tlm7x\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.552972 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.552983 4661 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:40 crc kubenswrapper[4661]: I0120 18:25:40.552992 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.088538 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4","Type":"ContainerDied","Data":"4ada2bd0dae89cc42a8bdd1ac96d284f34e035e0035e9328a2f90ef7a1f58c5d"} Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.088799 4661 scope.go:117] "RemoveContainer" containerID="373a0088aa2a89b582f21687f973c66e718eb537e2ed648d42024e1574c16fda" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.088616 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.111957 4661 scope.go:117] "RemoveContainer" containerID="a62e7e6c87dd6649efb329db945ddce05752de194660ef6f7246bf8205684dcb" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.131773 4661 scope.go:117] "RemoveContainer" containerID="e40b4ea97366e02aeef3a875ca4e5621745d2a173a54638487639386aeb40bda" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.179107 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.185434 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.210717 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:25:41 crc kubenswrapper[4661]: E0120 18:25:41.211092 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" containerName="init" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.211109 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" containerName="init" Jan 20 18:25:41 crc kubenswrapper[4661]: E0120 18:25:41.211126 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerName="proxy-httpd" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.211133 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerName="proxy-httpd" Jan 20 18:25:41 crc kubenswrapper[4661]: E0120 18:25:41.211142 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerName="sg-core" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.211148 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerName="sg-core" Jan 20 18:25:41 crc kubenswrapper[4661]: E0120 18:25:41.211168 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" containerName="dnsmasq-dns" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.211174 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" containerName="dnsmasq-dns" Jan 20 18:25:41 crc kubenswrapper[4661]: E0120 18:25:41.211185 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerName="ceilometer-notification-agent" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.211191 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerName="ceilometer-notification-agent" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.211333 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerName="sg-core" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.211343 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerName="proxy-httpd" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.211352 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebc6351c-f99f-4bf6-ae92-e0a8aee0b5be" containerName="dnsmasq-dns" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.211365 4661 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" containerName="ceilometer-notification-agent" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.212763 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.216333 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.216391 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.233647 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.370041 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.370355 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-scripts\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.370481 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a173834-dcef-416e-9eec-7c3038fcb78e-run-httpd\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.370586 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.370702 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpfhh\" (UniqueName: \"kubernetes.io/projected/1a173834-dcef-416e-9eec-7c3038fcb78e-kube-api-access-dpfhh\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.370845 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a173834-dcef-416e-9eec-7c3038fcb78e-log-httpd\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.371029 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-config-data\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.472373 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.472462 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpfhh\" (UniqueName: \"kubernetes.io/projected/1a173834-dcef-416e-9eec-7c3038fcb78e-kube-api-access-dpfhh\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.472489 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a173834-dcef-416e-9eec-7c3038fcb78e-log-httpd\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.472541 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-config-data\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.472611 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.472650 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-scripts\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.472688 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a173834-dcef-416e-9eec-7c3038fcb78e-run-httpd\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.473268 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a173834-dcef-416e-9eec-7c3038fcb78e-run-httpd\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.473406 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a173834-dcef-416e-9eec-7c3038fcb78e-log-httpd\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.477881 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-config-data\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.477959 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-scripts\") pod 
\"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.479404 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.486382 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.495008 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpfhh\" (UniqueName: \"kubernetes.io/projected/1a173834-dcef-416e-9eec-7c3038fcb78e-kube-api-access-dpfhh\") pod \"ceilometer-0\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " pod="openstack/ceilometer-0" Jan 20 18:25:41 crc kubenswrapper[4661]: I0120 18:25:41.539915 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:25:42 crc kubenswrapper[4661]: I0120 18:25:42.019943 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:25:42 crc kubenswrapper[4661]: W0120 18:25:42.026683 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a173834_dcef_416e_9eec_7c3038fcb78e.slice/crio-c7d905549f2409bcbf3ba02269b955e951fc268fc9c0d4e299d4dc1880a54dfd WatchSource:0}: Error finding container c7d905549f2409bcbf3ba02269b955e951fc268fc9c0d4e299d4dc1880a54dfd: Status 404 returned error can't find the container with id c7d905549f2409bcbf3ba02269b955e951fc268fc9c0d4e299d4dc1880a54dfd Jan 20 18:25:42 crc kubenswrapper[4661]: I0120 18:25:42.098126 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a173834-dcef-416e-9eec-7c3038fcb78e","Type":"ContainerStarted","Data":"c7d905549f2409bcbf3ba02269b955e951fc268fc9c0d4e299d4dc1880a54dfd"} Jan 20 18:25:42 crc kubenswrapper[4661]: I0120 18:25:42.153638 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4" path="/var/lib/kubelet/pods/5a5e84fe-d036-42ca-97d0-2abdb2b8ebe4/volumes" Jan 20 18:25:43 crc kubenswrapper[4661]: I0120 18:25:43.153389 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a173834-dcef-416e-9eec-7c3038fcb78e","Type":"ContainerStarted","Data":"b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4"} Jan 20 18:25:43 crc kubenswrapper[4661]: I0120 18:25:43.156133 4661 generic.go:334] "Generic (PLEG): container finished" podID="2423d758-4514-439d-a804-42287945bedc" containerID="bfb8fe059c190ece7c98766ac159b4761b111ba1e77524918c001bf90963fc6f" exitCode=0 Jan 20 18:25:43 crc kubenswrapper[4661]: I0120 18:25:43.156185 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-glwqq" event={"ID":"2423d758-4514-439d-a804-42287945bedc","Type":"ContainerDied","Data":"bfb8fe059c190ece7c98766ac159b4761b111ba1e77524918c001bf90963fc6f"} Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.220793 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"1a173834-dcef-416e-9eec-7c3038fcb78e","Type":"ContainerStarted","Data":"565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327"} Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.221272 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a173834-dcef-416e-9eec-7c3038fcb78e","Type":"ContainerStarted","Data":"b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7"} Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.420599 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.549278 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-glwqq" Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.727445 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2423d758-4514-439d-a804-42287945bedc-etc-machine-id\") pod \"2423d758-4514-439d-a804-42287945bedc\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.727823 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-scripts\") pod \"2423d758-4514-439d-a804-42287945bedc\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.727884 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-combined-ca-bundle\") pod \"2423d758-4514-439d-a804-42287945bedc\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.727950 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-db-sync-config-data\") pod \"2423d758-4514-439d-a804-42287945bedc\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.727977 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjndl\" (UniqueName: \"kubernetes.io/projected/2423d758-4514-439d-a804-42287945bedc-kube-api-access-fjndl\") pod \"2423d758-4514-439d-a804-42287945bedc\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.728096 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-config-data\") pod \"2423d758-4514-439d-a804-42287945bedc\" (UID: \"2423d758-4514-439d-a804-42287945bedc\") " Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.727573 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2423d758-4514-439d-a804-42287945bedc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2423d758-4514-439d-a804-42287945bedc" (UID: "2423d758-4514-439d-a804-42287945bedc"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.738091 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2423d758-4514-439d-a804-42287945bedc" (UID: "2423d758-4514-439d-a804-42287945bedc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.745626 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2423d758-4514-439d-a804-42287945bedc-kube-api-access-fjndl" (OuterVolumeSpecName: "kube-api-access-fjndl") pod "2423d758-4514-439d-a804-42287945bedc" (UID: "2423d758-4514-439d-a804-42287945bedc"). InnerVolumeSpecName "kube-api-access-fjndl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.754824 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-scripts" (OuterVolumeSpecName: "scripts") pod "2423d758-4514-439d-a804-42287945bedc" (UID: "2423d758-4514-439d-a804-42287945bedc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.789944 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2423d758-4514-439d-a804-42287945bedc" (UID: "2423d758-4514-439d-a804-42287945bedc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.816828 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-config-data" (OuterVolumeSpecName: "config-data") pod "2423d758-4514-439d-a804-42287945bedc" (UID: "2423d758-4514-439d-a804-42287945bedc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.829677 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.829703 4661 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2423d758-4514-439d-a804-42287945bedc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.829716 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.829724 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.829732 4661 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2423d758-4514-439d-a804-42287945bedc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:44 crc kubenswrapper[4661]: I0120 18:25:44.829740 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjndl\" (UniqueName: \"kubernetes.io/projected/2423d758-4514-439d-a804-42287945bedc-kube-api-access-fjndl\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.246019 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-glwqq" event={"ID":"2423d758-4514-439d-a804-42287945bedc","Type":"ContainerDied","Data":"2527365a3c1440fb8bc8be41183b4ce72bb300cab6550737327cb1272dab1fdf"} Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.246085 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2527365a3c1440fb8bc8be41183b4ce72bb300cab6550737327cb1272dab1fdf" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.246163 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-glwqq" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.419820 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 18:25:45 crc kubenswrapper[4661]: E0120 18:25:45.420154 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2423d758-4514-439d-a804-42287945bedc" containerName="cinder-db-sync" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.420173 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="2423d758-4514-439d-a804-42287945bedc" containerName="cinder-db-sync" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.420347 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="2423d758-4514-439d-a804-42287945bedc" containerName="cinder-db-sync" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.425131 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.437562 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.437966 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.438924 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.439053 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tl24v" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.515293 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.546969 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07c8af61-2003-440a-b91e-325023e1ce8d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.547021 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-config-data\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.547057 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-scripts\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.547168 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vdrg\" (UniqueName: \"kubernetes.io/projected/07c8af61-2003-440a-b91e-325023e1ce8d-kube-api-access-9vdrg\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.547256 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.547300 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.579123 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-fsv8j"] Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.581114 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.601755 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-fsv8j"] Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.648913 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vdrg\" (UniqueName: \"kubernetes.io/projected/07c8af61-2003-440a-b91e-325023e1ce8d-kube-api-access-9vdrg\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.648988 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.649031 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.649067 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07c8af61-2003-440a-b91e-325023e1ce8d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.649086 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-config-data\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.649113 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-scripts\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.650105 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07c8af61-2003-440a-b91e-325023e1ce8d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.654545 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-config-data\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.654899 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-scripts\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 
18:25:45.655360 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.658467 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.681379 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vdrg\" (UniqueName: \"kubernetes.io/projected/07c8af61-2003-440a-b91e-325023e1ce8d-kube-api-access-9vdrg\") pod \"cinder-scheduler-0\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.696097 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.697649 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.708948 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.711997 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.752814 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwhc6\" (UniqueName: \"kubernetes.io/projected/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-kube-api-access-kwhc6\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.752880 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.752917 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.752943 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.752960 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-config\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.775098 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.857906 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.857959 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-config\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.858012 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-config-data\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.858064 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.858109 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eea12f4c-2121-48d4-af51-bd98dd26100a-logs\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.858140 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea12f4c-2121-48d4-af51-bd98dd26100a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.858178 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwhc6\" (UniqueName: \"kubernetes.io/projected/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-kube-api-access-kwhc6\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.858213 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-scripts\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.858260 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-69g72\" (UniqueName: \"kubernetes.io/projected/eea12f4c-2121-48d4-af51-bd98dd26100a-kube-api-access-69g72\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.858284 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.858324 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.858349 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-config-data-custom\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.859381 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.860195 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-config\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.860763 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.863396 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.878374 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwhc6\" (UniqueName: \"kubernetes.io/projected/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-kube-api-access-kwhc6\") pod \"dnsmasq-dns-6d97fcdd8f-fsv8j\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.910534 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.961573 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-config-data\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.961646 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.961696 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eea12f4c-2121-48d4-af51-bd98dd26100a-logs\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.961726 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea12f4c-2121-48d4-af51-bd98dd26100a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.961761 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-scripts\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.961799 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69g72\" (UniqueName: \"kubernetes.io/projected/eea12f4c-2121-48d4-af51-bd98dd26100a-kube-api-access-69g72\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.961831 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-config-data-custom\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.962863 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eea12f4c-2121-48d4-af51-bd98dd26100a-logs\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.965836 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-config-data-custom\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.965917 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea12f4c-2121-48d4-af51-bd98dd26100a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " 
pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.967217 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-config-data\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.975907 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:45 crc kubenswrapper[4661]: I0120 18:25:45.976171 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-scripts\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:46 crc kubenswrapper[4661]: I0120 18:25:46.014133 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69g72\" (UniqueName: \"kubernetes.io/projected/eea12f4c-2121-48d4-af51-bd98dd26100a-kube-api-access-69g72\") pod \"cinder-api-0\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " pod="openstack/cinder-api-0" Jan 20 18:25:46 crc kubenswrapper[4661]: I0120 18:25:46.059157 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 18:25:46 crc kubenswrapper[4661]: I0120 18:25:46.289460 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a173834-dcef-416e-9eec-7c3038fcb78e","Type":"ContainerStarted","Data":"1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27"} Jan 20 18:25:46 crc kubenswrapper[4661]: I0120 18:25:46.290197 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 18:25:46 crc kubenswrapper[4661]: I0120 18:25:46.370198 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5503851690000001 podStartE2EDuration="5.370176626s" podCreationTimestamp="2026-01-20 18:25:41 +0000 UTC" firstStartedPulling="2026-01-20 18:25:42.028931577 +0000 UTC m=+1198.359721239" lastFinishedPulling="2026-01-20 18:25:45.848723034 +0000 UTC m=+1202.179512696" observedRunningTime="2026-01-20 18:25:46.327421801 +0000 UTC m=+1202.658211453" watchObservedRunningTime="2026-01-20 18:25:46.370176626 +0000 UTC m=+1202.700966288" Jan 20 18:25:46 crc kubenswrapper[4661]: I0120 18:25:46.455952 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 18:25:46 crc kubenswrapper[4661]: I0120 18:25:46.642035 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b8c8444c8-77p78" Jan 20 18:25:46 crc kubenswrapper[4661]: I0120 18:25:46.712361 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-79fcf454c6-dzz6z"] Jan 20 18:25:46 crc kubenswrapper[4661]: I0120 18:25:46.717749 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-79fcf454c6-dzz6z" podUID="52dc5147-ce19-4dcc-94b5-a2eaacfba32d" containerName="barbican-api-log" containerID="cri-o://1c28619d8fadc633e6dc9980642d7bffe0eecd909790908217e5b278b7484f67" gracePeriod=30 Jan 20 18:25:46 crc 
kubenswrapper[4661]: I0120 18:25:46.718208 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-79fcf454c6-dzz6z" podUID="52dc5147-ce19-4dcc-94b5-a2eaacfba32d" containerName="barbican-api" containerID="cri-o://62dc42b97d632f736c68a5edeb599f75a56ef8ad5ff6b2d89515f96206e9419a" gracePeriod=30 Jan 20 18:25:46 crc kubenswrapper[4661]: I0120 18:25:46.779771 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-fsv8j"] Jan 20 18:25:46 crc kubenswrapper[4661]: I0120 18:25:46.867105 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 18:25:47 crc kubenswrapper[4661]: I0120 18:25:47.306148 4661 generic.go:334] "Generic (PLEG): container finished" podID="52dc5147-ce19-4dcc-94b5-a2eaacfba32d" containerID="1c28619d8fadc633e6dc9980642d7bffe0eecd909790908217e5b278b7484f67" exitCode=143 Jan 20 18:25:47 crc kubenswrapper[4661]: I0120 18:25:47.306328 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fcf454c6-dzz6z" event={"ID":"52dc5147-ce19-4dcc-94b5-a2eaacfba32d","Type":"ContainerDied","Data":"1c28619d8fadc633e6dc9980642d7bffe0eecd909790908217e5b278b7484f67"} Jan 20 18:25:47 crc kubenswrapper[4661]: I0120 18:25:47.308018 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eea12f4c-2121-48d4-af51-bd98dd26100a","Type":"ContainerStarted","Data":"305a0c8d87473469d883531d5653e7d5b09926f215ecc9a75db88e620629136a"} Jan 20 18:25:47 crc kubenswrapper[4661]: I0120 18:25:47.310187 4661 generic.go:334] "Generic (PLEG): container finished" podID="67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" containerID="2d39e9c62669bf5e6d1cfe7d5b5ffea656fcdc7770f10e4ecca1d006ec76a61d" exitCode=0 Jan 20 18:25:47 crc kubenswrapper[4661]: I0120 18:25:47.312949 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" event={"ID":"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7","Type":"ContainerDied","Data":"2d39e9c62669bf5e6d1cfe7d5b5ffea656fcdc7770f10e4ecca1d006ec76a61d"} Jan 20 18:25:47 crc kubenswrapper[4661]: I0120 18:25:47.312983 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" event={"ID":"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7","Type":"ContainerStarted","Data":"0c4bb09fabc8c45a3ba4bcd8d8f4da140c1d4789d024a6c9564577b90d0feb46"} Jan 20 18:25:47 crc kubenswrapper[4661]: I0120 18:25:47.317214 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07c8af61-2003-440a-b91e-325023e1ce8d","Type":"ContainerStarted","Data":"f93e08509ce1f145f10587f8b38d038ced72bffd6d45df1f433faa44ee9a2951"} Jan 20 18:25:47 crc kubenswrapper[4661]: I0120 18:25:47.895802 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 20 18:25:48 crc kubenswrapper[4661]: I0120 18:25:48.343122 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eea12f4c-2121-48d4-af51-bd98dd26100a","Type":"ContainerStarted","Data":"86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0"} Jan 20 18:25:48 crc kubenswrapper[4661]: I0120 18:25:48.349796 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" event={"ID":"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7","Type":"ContainerStarted","Data":"e74a347199d47c64729d7e0a4baaf234808f081cd5cc79ddf30d623ca11bd566"} Jan 20 18:25:48 crc kubenswrapper[4661]: I0120 18:25:48.350752 4661 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:48 crc kubenswrapper[4661]: I0120 18:25:48.373246 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" podStartSLOduration=3.373230635 podStartE2EDuration="3.373230635s" podCreationTimestamp="2026-01-20 18:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:25:48.371927241 +0000 UTC m=+1204.702716913" watchObservedRunningTime="2026-01-20 18:25:48.373230635 +0000 UTC m=+1204.704020297" Jan 20 18:25:49 crc kubenswrapper[4661]: I0120 18:25:49.358239 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eea12f4c-2121-48d4-af51-bd98dd26100a","Type":"ContainerStarted","Data":"918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea"} Jan 20 18:25:49 crc kubenswrapper[4661]: I0120 18:25:49.358365 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="eea12f4c-2121-48d4-af51-bd98dd26100a" containerName="cinder-api-log" containerID="cri-o://86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0" gracePeriod=30 Jan 20 18:25:49 crc kubenswrapper[4661]: I0120 18:25:49.358614 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 20 18:25:49 crc kubenswrapper[4661]: I0120 18:25:49.358646 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="eea12f4c-2121-48d4-af51-bd98dd26100a" containerName="cinder-api" containerID="cri-o://918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea" gracePeriod=30 Jan 20 18:25:49 crc kubenswrapper[4661]: I0120 18:25:49.363886 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07c8af61-2003-440a-b91e-325023e1ce8d","Type":"ContainerStarted","Data":"23968789b84c0642363f8c2db9928aa1d1a4b495738652c786126f866c2ac0ac"} Jan 20 18:25:49 crc kubenswrapper[4661]: I0120 18:25:49.363918 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07c8af61-2003-440a-b91e-325023e1ce8d","Type":"ContainerStarted","Data":"0de7201438bba87c91f9d8730d86f685696f9d6c3ef60d87fc5ebd2d344e743e"} Jan 20 18:25:49 crc kubenswrapper[4661]: I0120 18:25:49.383112 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.383096969 podStartE2EDuration="4.383096969s" podCreationTimestamp="2026-01-20 18:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:25:49.376843845 +0000 UTC m=+1205.707633507" watchObservedRunningTime="2026-01-20 18:25:49.383096969 +0000 UTC m=+1205.713886621" Jan 20 18:25:49 crc kubenswrapper[4661]: I0120 18:25:49.402119 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.461532129 podStartE2EDuration="4.402089299s" podCreationTimestamp="2026-01-20 18:25:45 +0000 UTC" firstStartedPulling="2026-01-20 18:25:46.463984694 +0000 UTC m=+1202.794774346" lastFinishedPulling="2026-01-20 18:25:47.404541854 +0000 UTC m=+1203.735331516" observedRunningTime="2026-01-20 18:25:49.400150998 +0000 UTC m=+1205.730940660" 
watchObservedRunningTime="2026-01-20 18:25:49.402089299 +0000 UTC m=+1205.732878961" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.203820 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.277169 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-config-data-custom\") pod \"eea12f4c-2121-48d4-af51-bd98dd26100a\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.277218 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-scripts\") pod \"eea12f4c-2121-48d4-af51-bd98dd26100a\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.277340 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-config-data\") pod \"eea12f4c-2121-48d4-af51-bd98dd26100a\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.277388 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eea12f4c-2121-48d4-af51-bd98dd26100a-logs\") pod \"eea12f4c-2121-48d4-af51-bd98dd26100a\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.277405 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea12f4c-2121-48d4-af51-bd98dd26100a-etc-machine-id\") pod \"eea12f4c-2121-48d4-af51-bd98dd26100a\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.277427 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-combined-ca-bundle\") pod \"eea12f4c-2121-48d4-af51-bd98dd26100a\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.277629 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69g72\" (UniqueName: \"kubernetes.io/projected/eea12f4c-2121-48d4-af51-bd98dd26100a-kube-api-access-69g72\") pod \"eea12f4c-2121-48d4-af51-bd98dd26100a\" (UID: \"eea12f4c-2121-48d4-af51-bd98dd26100a\") " Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.279012 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea12f4c-2121-48d4-af51-bd98dd26100a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eea12f4c-2121-48d4-af51-bd98dd26100a" (UID: "eea12f4c-2121-48d4-af51-bd98dd26100a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.280507 4661 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea12f4c-2121-48d4-af51-bd98dd26100a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.283145 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea12f4c-2121-48d4-af51-bd98dd26100a-logs" (OuterVolumeSpecName: "logs") pod "eea12f4c-2121-48d4-af51-bd98dd26100a" (UID: "eea12f4c-2121-48d4-af51-bd98dd26100a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.285473 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eea12f4c-2121-48d4-af51-bd98dd26100a" (UID: "eea12f4c-2121-48d4-af51-bd98dd26100a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.299831 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea12f4c-2121-48d4-af51-bd98dd26100a-kube-api-access-69g72" (OuterVolumeSpecName: "kube-api-access-69g72") pod "eea12f4c-2121-48d4-af51-bd98dd26100a" (UID: "eea12f4c-2121-48d4-af51-bd98dd26100a"). InnerVolumeSpecName "kube-api-access-69g72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.303278 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-scripts" (OuterVolumeSpecName: "scripts") pod "eea12f4c-2121-48d4-af51-bd98dd26100a" (UID: "eea12f4c-2121-48d4-af51-bd98dd26100a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.315411 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79fcf454c6-dzz6z" podUID="52dc5147-ce19-4dcc-94b5-a2eaacfba32d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": read tcp 10.217.0.2:53836->10.217.0.147:9311: read: connection reset by peer" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.315426 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79fcf454c6-dzz6z" podUID="52dc5147-ce19-4dcc-94b5-a2eaacfba32d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": read tcp 10.217.0.2:53828->10.217.0.147:9311: read: connection reset by peer" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.350835 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eea12f4c-2121-48d4-af51-bd98dd26100a" (UID: "eea12f4c-2121-48d4-af51-bd98dd26100a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.380693 4661 generic.go:334] "Generic (PLEG): container finished" podID="eea12f4c-2121-48d4-af51-bd98dd26100a" containerID="918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea" exitCode=0 Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.382451 4661 generic.go:334] "Generic (PLEG): container finished" podID="eea12f4c-2121-48d4-af51-bd98dd26100a" containerID="86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0" exitCode=143 Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.380992 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-config-data" (OuterVolumeSpecName: "config-data") pod "eea12f4c-2121-48d4-af51-bd98dd26100a" (UID: "eea12f4c-2121-48d4-af51-bd98dd26100a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.381092 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eea12f4c-2121-48d4-af51-bd98dd26100a","Type":"ContainerDied","Data":"918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea"} Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.382739 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eea12f4c-2121-48d4-af51-bd98dd26100a","Type":"ContainerDied","Data":"86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0"} Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.382800 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eea12f4c-2121-48d4-af51-bd98dd26100a","Type":"ContainerDied","Data":"305a0c8d87473469d883531d5653e7d5b09926f215ecc9a75db88e620629136a"} Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.382856 4661 scope.go:117] "RemoveContainer" containerID="918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.381078 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.382435 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eea12f4c-2121-48d4-af51-bd98dd26100a-logs\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.383862 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.383876 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69g72\" (UniqueName: \"kubernetes.io/projected/eea12f4c-2121-48d4-af51-bd98dd26100a-kube-api-access-69g72\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.383887 4661 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.383906 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.404175 4661 generic.go:334] "Generic (PLEG): container finished" podID="52dc5147-ce19-4dcc-94b5-a2eaacfba32d" containerID="62dc42b97d632f736c68a5edeb599f75a56ef8ad5ff6b2d89515f96206e9419a" exitCode=0 Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.404381 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fcf454c6-dzz6z" event={"ID":"52dc5147-ce19-4dcc-94b5-a2eaacfba32d","Type":"ContainerDied","Data":"62dc42b97d632f736c68a5edeb599f75a56ef8ad5ff6b2d89515f96206e9419a"} Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.412882 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b484f76ff-qrd8w" event={"ID":"341d9328-73af-4986-9901-43b929a9e030","Type":"ContainerStarted","Data":"e822c1358f51a5a328f78e6f377a4f9ba8227471e5881ad269d14760136e9f61"} Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.412919 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b484f76ff-qrd8w" event={"ID":"341d9328-73af-4986-9901-43b929a9e030","Type":"ContainerStarted","Data":"6e69d9dcf724ba3f48ee5bbd2883ba320685fb62da07b9e8076c51220f59c235"} Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.423253 4661 scope.go:117] "RemoveContainer" containerID="86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.444106 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.450491 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.470815 4661 scope.go:117] "RemoveContainer" containerID="918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.476710 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 20 18:25:50 crc kubenswrapper[4661]: E0120 18:25:50.477179 4661 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eea12f4c-2121-48d4-af51-bd98dd26100a" containerName="cinder-api-log" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.477245 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea12f4c-2121-48d4-af51-bd98dd26100a" containerName="cinder-api-log" Jan 20 18:25:50 crc kubenswrapper[4661]: E0120 18:25:50.477310 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea12f4c-2121-48d4-af51-bd98dd26100a" containerName="cinder-api" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.477358 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea12f4c-2121-48d4-af51-bd98dd26100a" containerName="cinder-api" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.477589 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea12f4c-2121-48d4-af51-bd98dd26100a" containerName="cinder-api" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.477683 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea12f4c-2121-48d4-af51-bd98dd26100a" containerName="cinder-api-log" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.478568 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.481244 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.481489 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.481603 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.483793 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5b484f76ff-qrd8w" podStartSLOduration=3.46633727 podStartE2EDuration="30.483778734s" podCreationTimestamp="2026-01-20 18:25:20 +0000 UTC" firstStartedPulling="2026-01-20 18:25:22.571150803 +0000 UTC m=+1178.901940465" lastFinishedPulling="2026-01-20 18:25:49.588592267 +0000 UTC m=+1205.919381929" observedRunningTime="2026-01-20 18:25:50.471236694 +0000 UTC m=+1206.802026356" watchObservedRunningTime="2026-01-20 18:25:50.483778734 +0000 UTC m=+1206.814568396" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.484455 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.484559 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.484642 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-logs\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.484776 4661 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swq8n\" (UniqueName: \"kubernetes.io/projected/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-kube-api-access-swq8n\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.484850 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.484935 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-config-data-custom\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.485039 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.485119 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-scripts\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.485182 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-config-data\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.485320 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea12f4c-2121-48d4-af51-bd98dd26100a-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:50 crc kubenswrapper[4661]: E0120 18:25:50.487750 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea\": container with ID starting with 918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea not found: ID does not exist" containerID="918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.487856 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea"} err="failed to get container status \"918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea\": rpc error: code = NotFound desc = could not find container \"918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea\": container with ID starting with 918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea not found: ID does not exist" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 
18:25:50.487935 4661 scope.go:117] "RemoveContainer" containerID="86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0" Jan 20 18:25:50 crc kubenswrapper[4661]: E0120 18:25:50.496704 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0\": container with ID starting with 86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0 not found: ID does not exist" containerID="86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.496816 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0"} err="failed to get container status \"86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0\": rpc error: code = NotFound desc = could not find container \"86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0\": container with ID starting with 86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0 not found: ID does not exist" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.496884 4661 scope.go:117] "RemoveContainer" containerID="918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.498372 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.507650 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea"} err="failed to get container status \"918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea\": rpc error: code = NotFound desc = could not find container \"918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea\": container with ID starting with 918540e896fb88325b94343b41d1363cd0ac8f7dafbded3a8c69582958133eea not found: ID does not exist" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.507833 4661 scope.go:117] "RemoveContainer" containerID="86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.514124 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0"} err="failed to get container status \"86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0\": rpc error: code = NotFound desc = could not find container \"86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0\": container with ID starting with 86d9a4b9e7a02798a5898618549682c8c507eb842aabf3ad27be7ac5086e3bc0 not found: ID does not exist" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.587334 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.587411 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.587429 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-logs\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.587888 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-logs\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.587981 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swq8n\" (UniqueName: \"kubernetes.io/projected/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-kube-api-access-swq8n\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.588014 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.588049 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-config-data-custom\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.588095 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.588125 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-scripts\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.588144 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-config-data\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.588222 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.596332 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-scripts\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " 
pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.601515 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.602385 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-config-data-custom\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.609263 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.617536 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-config-data\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.617750 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.619464 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swq8n\" (UniqueName: \"kubernetes.io/projected/b4fa215a-165d-44b7-9bfd-19a2a9a5205c-kube-api-access-swq8n\") pod \"cinder-api-0\" (UID: \"b4fa215a-165d-44b7-9bfd-19a2a9a5205c\") " pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.777468 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.863094 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 18:25:50 crc kubenswrapper[4661]: I0120 18:25:50.960704 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.101619 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-combined-ca-bundle\") pod \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.101792 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-config-data\") pod \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.101826 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-config-data-custom\") pod \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.101879 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-logs\") pod \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.101934 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2bhg\" (UniqueName: \"kubernetes.io/projected/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-kube-api-access-z2bhg\") pod \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\" (UID: \"52dc5147-ce19-4dcc-94b5-a2eaacfba32d\") " Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.107809 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-logs" (OuterVolumeSpecName: "logs") pod "52dc5147-ce19-4dcc-94b5-a2eaacfba32d" (UID: "52dc5147-ce19-4dcc-94b5-a2eaacfba32d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.116238 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-kube-api-access-z2bhg" (OuterVolumeSpecName: "kube-api-access-z2bhg") pod "52dc5147-ce19-4dcc-94b5-a2eaacfba32d" (UID: "52dc5147-ce19-4dcc-94b5-a2eaacfba32d"). InnerVolumeSpecName "kube-api-access-z2bhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.116330 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "52dc5147-ce19-4dcc-94b5-a2eaacfba32d" (UID: "52dc5147-ce19-4dcc-94b5-a2eaacfba32d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.211092 4661 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.211124 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-logs\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.211160 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2bhg\" (UniqueName: \"kubernetes.io/projected/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-kube-api-access-z2bhg\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.220299 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52dc5147-ce19-4dcc-94b5-a2eaacfba32d" (UID: "52dc5147-ce19-4dcc-94b5-a2eaacfba32d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.252801 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-config-data" (OuterVolumeSpecName: "config-data") pod "52dc5147-ce19-4dcc-94b5-a2eaacfba32d" (UID: "52dc5147-ce19-4dcc-94b5-a2eaacfba32d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.334925 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.334964 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dc5147-ce19-4dcc-94b5-a2eaacfba32d-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.423544 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fcf454c6-dzz6z" event={"ID":"52dc5147-ce19-4dcc-94b5-a2eaacfba32d","Type":"ContainerDied","Data":"ab44fe2dfbf32408849cd233d3f532ff19e92cb93699af2f116d68e7c835b18e"} Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.423603 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79fcf454c6-dzz6z" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.423697 4661 scope.go:117] "RemoveContainer" containerID="62dc42b97d632f736c68a5edeb599f75a56ef8ad5ff6b2d89515f96206e9419a" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.439116 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.475062 4661 scope.go:117] "RemoveContainer" containerID="1c28619d8fadc633e6dc9980642d7bffe0eecd909790908217e5b278b7484f67" Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.493688 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-79fcf454c6-dzz6z"] Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.501407 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-79fcf454c6-dzz6z"] Jan 20 18:25:51 crc kubenswrapper[4661]: I0120 18:25:51.565811 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 18:25:52 crc kubenswrapper[4661]: I0120 18:25:52.154721 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52dc5147-ce19-4dcc-94b5-a2eaacfba32d" path="/var/lib/kubelet/pods/52dc5147-ce19-4dcc-94b5-a2eaacfba32d/volumes" Jan 20 18:25:52 crc kubenswrapper[4661]: I0120 18:25:52.155726 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea12f4c-2121-48d4-af51-bd98dd26100a" path="/var/lib/kubelet/pods/eea12f4c-2121-48d4-af51-bd98dd26100a/volumes" Jan 20 18:25:52 crc kubenswrapper[4661]: I0120 18:25:52.436064 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b4fa215a-165d-44b7-9bfd-19a2a9a5205c","Type":"ContainerStarted","Data":"4ccfd796d8fce418b568a7dc057257fe6b4ae99b638f406f0d8e8568a2e60c28"} Jan 20 18:25:53 crc kubenswrapper[4661]: I0120 18:25:53.374607 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:53 crc kubenswrapper[4661]: I0120 18:25:53.450695 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b4fa215a-165d-44b7-9bfd-19a2a9a5205c","Type":"ContainerStarted","Data":"22643bf1bae716ab318f95f3a33e701aa6a98304c01c3f7d14d621d4a8e1fd01"} Jan 20 18:25:53 crc kubenswrapper[4661]: I0120 18:25:53.452050 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b4fa215a-165d-44b7-9bfd-19a2a9a5205c","Type":"ContainerStarted","Data":"b6a4de48a0137abeb1101a71c95e30f8f2133adb040542d064af155f31e51f72"} Jan 20 18:25:53 crc kubenswrapper[4661]: I0120 18:25:53.452178 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 20 18:25:53 crc kubenswrapper[4661]: I0120 18:25:53.553947 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66f64dd556-cvpcx" Jan 20 18:25:53 crc kubenswrapper[4661]: I0120 18:25:53.570114 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.570095868 podStartE2EDuration="3.570095868s" podCreationTimestamp="2026-01-20 18:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:25:53.48620874 +0000 UTC m=+1209.816998422" watchObservedRunningTime="2026-01-20 18:25:53.570095868 +0000 UTC 
m=+1209.900885530" Jan 20 18:25:53 crc kubenswrapper[4661]: I0120 18:25:53.795873 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-57dd7457c5-2txjn" Jan 20 18:25:53 crc kubenswrapper[4661]: I0120 18:25:53.866442 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-689bd5f764-p5qpx"] Jan 20 18:25:53 crc kubenswrapper[4661]: I0120 18:25:53.866747 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-689bd5f764-p5qpx" podUID="051bbaec-94d3-4a08-ab1b-4417e566e5f3" containerName="neutron-api" containerID="cri-o://efa988c462b7d5828080f9b8427590917c2e9b2ce41f95a8ba5e62b8472613a8" gracePeriod=30 Jan 20 18:25:53 crc kubenswrapper[4661]: I0120 18:25:53.866862 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-689bd5f764-p5qpx" podUID="051bbaec-94d3-4a08-ab1b-4417e566e5f3" containerName="neutron-httpd" containerID="cri-o://8956ce0726710ad0120621168680b90297d4cdda4657606c96b00e928727d835" gracePeriod=30 Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.387968 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-c468c8b55-f2kw4" Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.461059 4661 generic.go:334] "Generic (PLEG): container finished" podID="051bbaec-94d3-4a08-ab1b-4417e566e5f3" containerID="8956ce0726710ad0120621168680b90297d4cdda4657606c96b00e928727d835" exitCode=0 Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.461144 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-689bd5f764-p5qpx" event={"ID":"051bbaec-94d3-4a08-ab1b-4417e566e5f3","Type":"ContainerDied","Data":"8956ce0726710ad0120621168680b90297d4cdda4657606c96b00e928727d835"} Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.858720 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 20 18:25:54 crc kubenswrapper[4661]: E0120 18:25:54.859071 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52dc5147-ce19-4dcc-94b5-a2eaacfba32d" containerName="barbican-api-log" Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.859089 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="52dc5147-ce19-4dcc-94b5-a2eaacfba32d" containerName="barbican-api-log" Jan 20 18:25:54 crc kubenswrapper[4661]: E0120 18:25:54.859105 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52dc5147-ce19-4dcc-94b5-a2eaacfba32d" containerName="barbican-api" Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.859111 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="52dc5147-ce19-4dcc-94b5-a2eaacfba32d" containerName="barbican-api" Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.859275 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="52dc5147-ce19-4dcc-94b5-a2eaacfba32d" containerName="barbican-api-log" Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.859293 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="52dc5147-ce19-4dcc-94b5-a2eaacfba32d" containerName="barbican-api" Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.859874 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.862809 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.863038 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-ddv6q" Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.863263 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.871316 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.917562 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b78b7c-8709-4a28-bc8f-1cf8960203cc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c6b78b7c-8709-4a28-bc8f-1cf8960203cc\") " pod="openstack/openstackclient" Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.917610 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c6b78b7c-8709-4a28-bc8f-1cf8960203cc-openstack-config\") pod \"openstackclient\" (UID: \"c6b78b7c-8709-4a28-bc8f-1cf8960203cc\") " pod="openstack/openstackclient" Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.917797 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbd9x\" (UniqueName: \"kubernetes.io/projected/c6b78b7c-8709-4a28-bc8f-1cf8960203cc-kube-api-access-cbd9x\") pod \"openstackclient\" (UID: \"c6b78b7c-8709-4a28-bc8f-1cf8960203cc\") " pod="openstack/openstackclient" Jan 20 18:25:54 crc kubenswrapper[4661]: I0120 18:25:54.917824 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c6b78b7c-8709-4a28-bc8f-1cf8960203cc-openstack-config-secret\") pod \"openstackclient\" (UID: \"c6b78b7c-8709-4a28-bc8f-1cf8960203cc\") " pod="openstack/openstackclient" Jan 20 18:25:55 crc kubenswrapper[4661]: I0120 18:25:55.019033 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbd9x\" (UniqueName: \"kubernetes.io/projected/c6b78b7c-8709-4a28-bc8f-1cf8960203cc-kube-api-access-cbd9x\") pod \"openstackclient\" (UID: \"c6b78b7c-8709-4a28-bc8f-1cf8960203cc\") " pod="openstack/openstackclient" Jan 20 18:25:55 crc kubenswrapper[4661]: I0120 18:25:55.019085 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c6b78b7c-8709-4a28-bc8f-1cf8960203cc-openstack-config-secret\") pod \"openstackclient\" (UID: \"c6b78b7c-8709-4a28-bc8f-1cf8960203cc\") " pod="openstack/openstackclient" Jan 20 18:25:55 crc kubenswrapper[4661]: I0120 18:25:55.019136 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b78b7c-8709-4a28-bc8f-1cf8960203cc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c6b78b7c-8709-4a28-bc8f-1cf8960203cc\") " pod="openstack/openstackclient" Jan 20 18:25:55 crc kubenswrapper[4661]: I0120 18:25:55.019158 4661 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c6b78b7c-8709-4a28-bc8f-1cf8960203cc-openstack-config\") pod \"openstackclient\" (UID: \"c6b78b7c-8709-4a28-bc8f-1cf8960203cc\") " pod="openstack/openstackclient" Jan 20 18:25:55 crc kubenswrapper[4661]: I0120 18:25:55.020145 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c6b78b7c-8709-4a28-bc8f-1cf8960203cc-openstack-config\") pod \"openstackclient\" (UID: \"c6b78b7c-8709-4a28-bc8f-1cf8960203cc\") " pod="openstack/openstackclient" Jan 20 18:25:55 crc kubenswrapper[4661]: I0120 18:25:55.027385 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b78b7c-8709-4a28-bc8f-1cf8960203cc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c6b78b7c-8709-4a28-bc8f-1cf8960203cc\") " pod="openstack/openstackclient" Jan 20 18:25:55 crc kubenswrapper[4661]: I0120 18:25:55.028051 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c6b78b7c-8709-4a28-bc8f-1cf8960203cc-openstack-config-secret\") pod \"openstackclient\" (UID: \"c6b78b7c-8709-4a28-bc8f-1cf8960203cc\") " pod="openstack/openstackclient" Jan 20 18:25:55 crc kubenswrapper[4661]: I0120 18:25:55.038244 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbd9x\" (UniqueName: \"kubernetes.io/projected/c6b78b7c-8709-4a28-bc8f-1cf8960203cc-kube-api-access-cbd9x\") pod \"openstackclient\" (UID: \"c6b78b7c-8709-4a28-bc8f-1cf8960203cc\") " pod="openstack/openstackclient" Jan 20 18:25:55 crc kubenswrapper[4661]: I0120 18:25:55.175655 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 20 18:25:55 crc kubenswrapper[4661]: I0120 18:25:55.785765 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 20 18:25:55 crc kubenswrapper[4661]: W0120 18:25:55.810379 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6b78b7c_8709_4a28_bc8f_1cf8960203cc.slice/crio-305760c0756b3bfda3d5fc06b78c97cf1e33edf84992f38fb0227177858be649 WatchSource:0}: Error finding container 305760c0756b3bfda3d5fc06b78c97cf1e33edf84992f38fb0227177858be649: Status 404 returned error can't find the container with id 305760c0756b3bfda3d5fc06b78c97cf1e33edf84992f38fb0227177858be649 Jan 20 18:25:55 crc kubenswrapper[4661]: I0120 18:25:55.911855 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:25:55 crc kubenswrapper[4661]: I0120 18:25:55.975802 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-5nk7f"] Jan 20 18:25:55 crc kubenswrapper[4661]: I0120 18:25:55.976026 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" podUID="deebca4d-9edb-45ba-90a2-c58696b6c0d7" containerName="dnsmasq-dns" containerID="cri-o://6e736c620712a32ceddafe9766bb3f4402b36e9925c35abadad5dd11afbf1e7b" gracePeriod=10 Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.061106 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.163566 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.436985 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.494557 4661 generic.go:334] "Generic (PLEG): container finished" podID="deebca4d-9edb-45ba-90a2-c58696b6c0d7" containerID="6e736c620712a32ceddafe9766bb3f4402b36e9925c35abadad5dd11afbf1e7b" exitCode=0 Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.494942 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" event={"ID":"deebca4d-9edb-45ba-90a2-c58696b6c0d7","Type":"ContainerDied","Data":"6e736c620712a32ceddafe9766bb3f4402b36e9925c35abadad5dd11afbf1e7b"} Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.504757 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" event={"ID":"deebca4d-9edb-45ba-90a2-c58696b6c0d7","Type":"ContainerDied","Data":"d6a06ba0686004c7ba48069cda6b43f6a92aaf4082dcf2f4933fb1f960e5d7dc"} Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.504843 4661 scope.go:117] "RemoveContainer" containerID="6e736c620712a32ceddafe9766bb3f4402b36e9925c35abadad5dd11afbf1e7b" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.504892 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-5nk7f" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.516851 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c6b78b7c-8709-4a28-bc8f-1cf8960203cc","Type":"ContainerStarted","Data":"305760c0756b3bfda3d5fc06b78c97cf1e33edf84992f38fb0227177858be649"} Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.517248 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="07c8af61-2003-440a-b91e-325023e1ce8d" containerName="cinder-scheduler" containerID="cri-o://0de7201438bba87c91f9d8730d86f685696f9d6c3ef60d87fc5ebd2d344e743e" gracePeriod=30 Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.517747 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="07c8af61-2003-440a-b91e-325023e1ce8d" containerName="probe" containerID="cri-o://23968789b84c0642363f8c2db9928aa1d1a4b495738652c786126f866c2ac0ac" gracePeriod=30 Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.549798 4661 scope.go:117] "RemoveContainer" containerID="98fb9616f7b5f78e1a53f936d47574a9188899d576bed25acbe1f83c83288f5f" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.550733 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-ovsdbserver-nb\") pod \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.550886 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-dns-svc\") pod \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.551120 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkssr\" (UniqueName: \"kubernetes.io/projected/deebca4d-9edb-45ba-90a2-c58696b6c0d7-kube-api-access-tkssr\") pod \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.551264 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-config\") pod \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.552422 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-ovsdbserver-sb\") pod \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\" (UID: \"deebca4d-9edb-45ba-90a2-c58696b6c0d7\") " Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.581412 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deebca4d-9edb-45ba-90a2-c58696b6c0d7-kube-api-access-tkssr" (OuterVolumeSpecName: "kube-api-access-tkssr") pod "deebca4d-9edb-45ba-90a2-c58696b6c0d7" (UID: "deebca4d-9edb-45ba-90a2-c58696b6c0d7"). InnerVolumeSpecName "kube-api-access-tkssr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.600739 4661 scope.go:117] "RemoveContainer" containerID="6e736c620712a32ceddafe9766bb3f4402b36e9925c35abadad5dd11afbf1e7b" Jan 20 18:25:56 crc kubenswrapper[4661]: E0120 18:25:56.601219 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e736c620712a32ceddafe9766bb3f4402b36e9925c35abadad5dd11afbf1e7b\": container with ID starting with 6e736c620712a32ceddafe9766bb3f4402b36e9925c35abadad5dd11afbf1e7b not found: ID does not exist" containerID="6e736c620712a32ceddafe9766bb3f4402b36e9925c35abadad5dd11afbf1e7b" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.601267 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e736c620712a32ceddafe9766bb3f4402b36e9925c35abadad5dd11afbf1e7b"} err="failed to get container status \"6e736c620712a32ceddafe9766bb3f4402b36e9925c35abadad5dd11afbf1e7b\": rpc error: code = NotFound desc = could not find container \"6e736c620712a32ceddafe9766bb3f4402b36e9925c35abadad5dd11afbf1e7b\": container with ID starting with 6e736c620712a32ceddafe9766bb3f4402b36e9925c35abadad5dd11afbf1e7b not found: ID does not exist" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.601287 4661 scope.go:117] "RemoveContainer" containerID="98fb9616f7b5f78e1a53f936d47574a9188899d576bed25acbe1f83c83288f5f" Jan 20 18:25:56 crc kubenswrapper[4661]: E0120 18:25:56.601609 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98fb9616f7b5f78e1a53f936d47574a9188899d576bed25acbe1f83c83288f5f\": container with ID starting with 98fb9616f7b5f78e1a53f936d47574a9188899d576bed25acbe1f83c83288f5f not found: ID does not exist" containerID="98fb9616f7b5f78e1a53f936d47574a9188899d576bed25acbe1f83c83288f5f" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.601627 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98fb9616f7b5f78e1a53f936d47574a9188899d576bed25acbe1f83c83288f5f"} err="failed to get container status \"98fb9616f7b5f78e1a53f936d47574a9188899d576bed25acbe1f83c83288f5f\": rpc error: code = NotFound desc = could not find container \"98fb9616f7b5f78e1a53f936d47574a9188899d576bed25acbe1f83c83288f5f\": container with ID starting with 98fb9616f7b5f78e1a53f936d47574a9188899d576bed25acbe1f83c83288f5f not found: ID does not exist" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.618394 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "deebca4d-9edb-45ba-90a2-c58696b6c0d7" (UID: "deebca4d-9edb-45ba-90a2-c58696b6c0d7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.639270 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "deebca4d-9edb-45ba-90a2-c58696b6c0d7" (UID: "deebca4d-9edb-45ba-90a2-c58696b6c0d7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.654687 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkssr\" (UniqueName: \"kubernetes.io/projected/deebca4d-9edb-45ba-90a2-c58696b6c0d7-kube-api-access-tkssr\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.654717 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.654727 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.661219 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "deebca4d-9edb-45ba-90a2-c58696b6c0d7" (UID: "deebca4d-9edb-45ba-90a2-c58696b6c0d7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.687177 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-config" (OuterVolumeSpecName: "config") pod "deebca4d-9edb-45ba-90a2-c58696b6c0d7" (UID: "deebca4d-9edb-45ba-90a2-c58696b6c0d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.757896 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.757930 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deebca4d-9edb-45ba-90a2-c58696b6c0d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.845292 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-5nk7f"] Jan 20 18:25:56 crc kubenswrapper[4661]: I0120 18:25:56.853087 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-5nk7f"] Jan 20 18:25:57 crc kubenswrapper[4661]: I0120 18:25:57.528846 4661 generic.go:334] "Generic (PLEG): container finished" podID="07c8af61-2003-440a-b91e-325023e1ce8d" containerID="23968789b84c0642363f8c2db9928aa1d1a4b495738652c786126f866c2ac0ac" exitCode=0 Jan 20 18:25:57 crc kubenswrapper[4661]: I0120 18:25:57.528928 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07c8af61-2003-440a-b91e-325023e1ce8d","Type":"ContainerDied","Data":"23968789b84c0642363f8c2db9928aa1d1a4b495738652c786126f866c2ac0ac"} Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.151645 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deebca4d-9edb-45ba-90a2-c58696b6c0d7" path="/var/lib/kubelet/pods/deebca4d-9edb-45ba-90a2-c58696b6c0d7/volumes" Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.567913 4661 generic.go:334] "Generic (PLEG): container finished" podID="051bbaec-94d3-4a08-ab1b-4417e566e5f3" 
containerID="efa988c462b7d5828080f9b8427590917c2e9b2ce41f95a8ba5e62b8472613a8" exitCode=0 Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.568089 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-689bd5f764-p5qpx" event={"ID":"051bbaec-94d3-4a08-ab1b-4417e566e5f3","Type":"ContainerDied","Data":"efa988c462b7d5828080f9b8427590917c2e9b2ce41f95a8ba5e62b8472613a8"} Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.586781 4661 generic.go:334] "Generic (PLEG): container finished" podID="07c8af61-2003-440a-b91e-325023e1ce8d" containerID="0de7201438bba87c91f9d8730d86f685696f9d6c3ef60d87fc5ebd2d344e743e" exitCode=0 Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.586825 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07c8af61-2003-440a-b91e-325023e1ce8d","Type":"ContainerDied","Data":"0de7201438bba87c91f9d8730d86f685696f9d6c3ef60d87fc5ebd2d344e743e"} Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.646513 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.703270 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vdrg\" (UniqueName: \"kubernetes.io/projected/07c8af61-2003-440a-b91e-325023e1ce8d-kube-api-access-9vdrg\") pod \"07c8af61-2003-440a-b91e-325023e1ce8d\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.703322 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-config-data\") pod \"07c8af61-2003-440a-b91e-325023e1ce8d\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.703397 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07c8af61-2003-440a-b91e-325023e1ce8d-etc-machine-id\") pod \"07c8af61-2003-440a-b91e-325023e1ce8d\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.703416 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-scripts\") pod \"07c8af61-2003-440a-b91e-325023e1ce8d\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.703466 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-combined-ca-bundle\") pod \"07c8af61-2003-440a-b91e-325023e1ce8d\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.703504 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-config-data-custom\") pod \"07c8af61-2003-440a-b91e-325023e1ce8d\" (UID: \"07c8af61-2003-440a-b91e-325023e1ce8d\") " Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.703836 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c8af61-2003-440a-b91e-325023e1ce8d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"07c8af61-2003-440a-b91e-325023e1ce8d" (UID: "07c8af61-2003-440a-b91e-325023e1ce8d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.719551 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c8af61-2003-440a-b91e-325023e1ce8d-kube-api-access-9vdrg" (OuterVolumeSpecName: "kube-api-access-9vdrg") pod "07c8af61-2003-440a-b91e-325023e1ce8d" (UID: "07c8af61-2003-440a-b91e-325023e1ce8d"). InnerVolumeSpecName "kube-api-access-9vdrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.732193 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "07c8af61-2003-440a-b91e-325023e1ce8d" (UID: "07c8af61-2003-440a-b91e-325023e1ce8d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.732304 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-scripts" (OuterVolumeSpecName: "scripts") pod "07c8af61-2003-440a-b91e-325023e1ce8d" (UID: "07c8af61-2003-440a-b91e-325023e1ce8d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.805945 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vdrg\" (UniqueName: \"kubernetes.io/projected/07c8af61-2003-440a-b91e-325023e1ce8d-kube-api-access-9vdrg\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.805973 4661 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07c8af61-2003-440a-b91e-325023e1ce8d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.805984 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.805994 4661 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.836874 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07c8af61-2003-440a-b91e-325023e1ce8d" (UID: "07c8af61-2003-440a-b91e-325023e1ce8d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.908282 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:58 crc kubenswrapper[4661]: I0120 18:25:58.917398 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-config-data" (OuterVolumeSpecName: "config-data") pod "07c8af61-2003-440a-b91e-325023e1ce8d" (UID: "07c8af61-2003-440a-b91e-325023e1ce8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.009984 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c8af61-2003-440a-b91e-325023e1ce8d-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.021720 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.111525 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-httpd-config\") pod \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.111647 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmd2z\" (UniqueName: \"kubernetes.io/projected/051bbaec-94d3-4a08-ab1b-4417e566e5f3-kube-api-access-nmd2z\") pod \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.111773 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-config\") pod \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.111792 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-ovndb-tls-certs\") pod \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.111813 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-combined-ca-bundle\") pod \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\" (UID: \"051bbaec-94d3-4a08-ab1b-4417e566e5f3\") " Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.126007 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "051bbaec-94d3-4a08-ab1b-4417e566e5f3" (UID: "051bbaec-94d3-4a08-ab1b-4417e566e5f3"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.132831 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051bbaec-94d3-4a08-ab1b-4417e566e5f3-kube-api-access-nmd2z" (OuterVolumeSpecName: "kube-api-access-nmd2z") pod "051bbaec-94d3-4a08-ab1b-4417e566e5f3" (UID: "051bbaec-94d3-4a08-ab1b-4417e566e5f3"). InnerVolumeSpecName "kube-api-access-nmd2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.181812 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-config" (OuterVolumeSpecName: "config") pod "051bbaec-94d3-4a08-ab1b-4417e566e5f3" (UID: "051bbaec-94d3-4a08-ab1b-4417e566e5f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.187953 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "051bbaec-94d3-4a08-ab1b-4417e566e5f3" (UID: "051bbaec-94d3-4a08-ab1b-4417e566e5f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.196296 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "051bbaec-94d3-4a08-ab1b-4417e566e5f3" (UID: "051bbaec-94d3-4a08-ab1b-4417e566e5f3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.213718 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.213748 4661 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.213759 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.213767 4661 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/051bbaec-94d3-4a08-ab1b-4417e566e5f3-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.213776 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmd2z\" (UniqueName: \"kubernetes.io/projected/051bbaec-94d3-4a08-ab1b-4417e566e5f3-kube-api-access-nmd2z\") on node \"crc\" DevicePath \"\"" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.607347 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.607553 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07c8af61-2003-440a-b91e-325023e1ce8d","Type":"ContainerDied","Data":"f93e08509ce1f145f10587f8b38d038ced72bffd6d45df1f433faa44ee9a2951"} Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.608065 4661 scope.go:117] "RemoveContainer" containerID="23968789b84c0642363f8c2db9928aa1d1a4b495738652c786126f866c2ac0ac" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.624799 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-689bd5f764-p5qpx" event={"ID":"051bbaec-94d3-4a08-ab1b-4417e566e5f3","Type":"ContainerDied","Data":"971f04cb600621070bde5d5dd44cb0d4b7b2a2b05eb782bf691609c8c447fae4"} Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.624872 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-689bd5f764-p5qpx" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.663725 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.672920 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.681730 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-689bd5f764-p5qpx"] Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.688939 4661 scope.go:117] "RemoveContainer" containerID="0de7201438bba87c91f9d8730d86f685696f9d6c3ef60d87fc5ebd2d344e743e" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.690019 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-689bd5f764-p5qpx"] Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.709812 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 18:25:59 crc kubenswrapper[4661]: E0120 18:25:59.710263 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051bbaec-94d3-4a08-ab1b-4417e566e5f3" containerName="neutron-httpd" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.710337 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="051bbaec-94d3-4a08-ab1b-4417e566e5f3" containerName="neutron-httpd" Jan 20 18:25:59 crc kubenswrapper[4661]: E0120 18:25:59.710394 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c8af61-2003-440a-b91e-325023e1ce8d" containerName="cinder-scheduler" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.710452 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c8af61-2003-440a-b91e-325023e1ce8d" containerName="cinder-scheduler" Jan 20 18:25:59 crc kubenswrapper[4661]: E0120 18:25:59.710516 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deebca4d-9edb-45ba-90a2-c58696b6c0d7" containerName="init" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.710568 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="deebca4d-9edb-45ba-90a2-c58696b6c0d7" containerName="init" Jan 20 18:25:59 crc kubenswrapper[4661]: E0120 18:25:59.710617 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051bbaec-94d3-4a08-ab1b-4417e566e5f3" containerName="neutron-api" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.710681 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="051bbaec-94d3-4a08-ab1b-4417e566e5f3" containerName="neutron-api" Jan 
20 18:25:59 crc kubenswrapper[4661]: E0120 18:25:59.710740 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c8af61-2003-440a-b91e-325023e1ce8d" containerName="probe" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.710798 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c8af61-2003-440a-b91e-325023e1ce8d" containerName="probe" Jan 20 18:25:59 crc kubenswrapper[4661]: E0120 18:25:59.710864 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deebca4d-9edb-45ba-90a2-c58696b6c0d7" containerName="dnsmasq-dns" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.710913 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="deebca4d-9edb-45ba-90a2-c58696b6c0d7" containerName="dnsmasq-dns" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.711132 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="051bbaec-94d3-4a08-ab1b-4417e566e5f3" containerName="neutron-api" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.711198 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c8af61-2003-440a-b91e-325023e1ce8d" containerName="probe" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.711258 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="deebca4d-9edb-45ba-90a2-c58696b6c0d7" containerName="dnsmasq-dns" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.711326 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c8af61-2003-440a-b91e-325023e1ce8d" containerName="cinder-scheduler" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.711381 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="051bbaec-94d3-4a08-ab1b-4417e566e5f3" containerName="neutron-httpd" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.712355 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.727436 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.731034 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.744960 4661 scope.go:117] "RemoveContainer" containerID="8956ce0726710ad0120621168680b90297d4cdda4657606c96b00e928727d835" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.833566 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da178eaf-bf04-4638-a071-808d119fd4ec-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.833623 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da178eaf-bf04-4638-a071-808d119fd4ec-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.833656 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da178eaf-bf04-4638-a071-808d119fd4ec-scripts\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.833688 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whjb9\" (UniqueName: \"kubernetes.io/projected/da178eaf-bf04-4638-a071-808d119fd4ec-kube-api-access-whjb9\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.833754 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da178eaf-bf04-4638-a071-808d119fd4ec-config-data\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.833789 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da178eaf-bf04-4638-a071-808d119fd4ec-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.867926 4661 scope.go:117] "RemoveContainer" containerID="efa988c462b7d5828080f9b8427590917c2e9b2ce41f95a8ba5e62b8472613a8" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.935221 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da178eaf-bf04-4638-a071-808d119fd4ec-scripts\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.935288 4661 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-whjb9\" (UniqueName: \"kubernetes.io/projected/da178eaf-bf04-4638-a071-808d119fd4ec-kube-api-access-whjb9\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.935360 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da178eaf-bf04-4638-a071-808d119fd4ec-config-data\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.935394 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da178eaf-bf04-4638-a071-808d119fd4ec-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.935435 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da178eaf-bf04-4638-a071-808d119fd4ec-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.935466 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da178eaf-bf04-4638-a071-808d119fd4ec-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.935550 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da178eaf-bf04-4638-a071-808d119fd4ec-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.942566 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da178eaf-bf04-4638-a071-808d119fd4ec-scripts\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.944212 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da178eaf-bf04-4638-a071-808d119fd4ec-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.944950 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da178eaf-bf04-4638-a071-808d119fd4ec-config-data\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc kubenswrapper[4661]: I0120 18:25:59.945346 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da178eaf-bf04-4638-a071-808d119fd4ec-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:25:59 crc 
kubenswrapper[4661]: I0120 18:25:59.959754 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whjb9\" (UniqueName: \"kubernetes.io/projected/da178eaf-bf04-4638-a071-808d119fd4ec-kube-api-access-whjb9\") pod \"cinder-scheduler-0\" (UID: \"da178eaf-bf04-4638-a071-808d119fd4ec\") " pod="openstack/cinder-scheduler-0" Jan 20 18:26:00 crc kubenswrapper[4661]: I0120 18:26:00.054288 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 18:26:00 crc kubenswrapper[4661]: I0120 18:26:00.155211 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="051bbaec-94d3-4a08-ab1b-4417e566e5f3" path="/var/lib/kubelet/pods/051bbaec-94d3-4a08-ab1b-4417e566e5f3/volumes" Jan 20 18:26:00 crc kubenswrapper[4661]: I0120 18:26:00.155891 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c8af61-2003-440a-b91e-325023e1ce8d" path="/var/lib/kubelet/pods/07c8af61-2003-440a-b91e-325023e1ce8d/volumes" Jan 20 18:26:00 crc kubenswrapper[4661]: I0120 18:26:00.582492 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 18:26:00 crc kubenswrapper[4661]: W0120 18:26:00.586274 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda178eaf_bf04_4638_a071_808d119fd4ec.slice/crio-398f13405bfbd59ab5c399e4a13c487caeaa1602c34810373bfb3952fdb4c6f0 WatchSource:0}: Error finding container 398f13405bfbd59ab5c399e4a13c487caeaa1602c34810373bfb3952fdb4c6f0: Status 404 returned error can't find the container with id 398f13405bfbd59ab5c399e4a13c487caeaa1602c34810373bfb3952fdb4c6f0 Jan 20 18:26:00 crc kubenswrapper[4661]: I0120 18:26:00.648293 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da178eaf-bf04-4638-a071-808d119fd4ec","Type":"ContainerStarted","Data":"398f13405bfbd59ab5c399e4a13c487caeaa1602c34810373bfb3952fdb4c6f0"} Jan 20 18:26:01 crc kubenswrapper[4661]: I0120 18:26:01.665442 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da178eaf-bf04-4638-a071-808d119fd4ec","Type":"ContainerStarted","Data":"50ae99bf608e6cfb9e46c4e58d225a503faea8b23080939b633ef84d6553207d"} Jan 20 18:26:01 crc kubenswrapper[4661]: I0120 18:26:01.824339 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vlvmp"] Jan 20 18:26:01 crc kubenswrapper[4661]: I0120 18:26:01.825453 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vlvmp" Jan 20 18:26:01 crc kubenswrapper[4661]: I0120 18:26:01.833358 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vlvmp"] Jan 20 18:26:01 crc kubenswrapper[4661]: I0120 18:26:01.866032 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl8fq\" (UniqueName: \"kubernetes.io/projected/a47f9dfb-b359-4486-bbb8-7895eddd6176-kube-api-access-jl8fq\") pod \"nova-api-db-create-vlvmp\" (UID: \"a47f9dfb-b359-4486-bbb8-7895eddd6176\") " pod="openstack/nova-api-db-create-vlvmp" Jan 20 18:26:01 crc kubenswrapper[4661]: I0120 18:26:01.866088 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a47f9dfb-b359-4486-bbb8-7895eddd6176-operator-scripts\") pod \"nova-api-db-create-vlvmp\" (UID: \"a47f9dfb-b359-4486-bbb8-7895eddd6176\") " pod="openstack/nova-api-db-create-vlvmp" Jan 20 18:26:01 crc kubenswrapper[4661]: I0120 18:26:01.968719 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl8fq\" (UniqueName: \"kubernetes.io/projected/a47f9dfb-b359-4486-bbb8-7895eddd6176-kube-api-access-jl8fq\") pod \"nova-api-db-create-vlvmp\" (UID: \"a47f9dfb-b359-4486-bbb8-7895eddd6176\") " pod="openstack/nova-api-db-create-vlvmp" Jan 20 18:26:01 crc kubenswrapper[4661]: I0120 18:26:01.969037 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a47f9dfb-b359-4486-bbb8-7895eddd6176-operator-scripts\") pod \"nova-api-db-create-vlvmp\" (UID: \"a47f9dfb-b359-4486-bbb8-7895eddd6176\") " pod="openstack/nova-api-db-create-vlvmp" Jan 20 18:26:01 crc kubenswrapper[4661]: I0120 18:26:01.969799 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a47f9dfb-b359-4486-bbb8-7895eddd6176-operator-scripts\") pod \"nova-api-db-create-vlvmp\" (UID: \"a47f9dfb-b359-4486-bbb8-7895eddd6176\") " pod="openstack/nova-api-db-create-vlvmp" Jan 20 18:26:01 crc kubenswrapper[4661]: I0120 18:26:01.994342 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl8fq\" (UniqueName: \"kubernetes.io/projected/a47f9dfb-b359-4486-bbb8-7895eddd6176-kube-api-access-jl8fq\") pod \"nova-api-db-create-vlvmp\" (UID: \"a47f9dfb-b359-4486-bbb8-7895eddd6176\") " pod="openstack/nova-api-db-create-vlvmp" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.049026 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8e5c-account-create-update-4p6rp"] Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.050018 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8e5c-account-create-update-4p6rp" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.054232 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.063348 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8e5c-account-create-update-4p6rp"] Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.076534 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b20fc329-8560-46c2-ad9e-235a140302e2-operator-scripts\") pod \"nova-api-8e5c-account-create-update-4p6rp\" (UID: \"b20fc329-8560-46c2-ad9e-235a140302e2\") " pod="openstack/nova-api-8e5c-account-create-update-4p6rp" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.076580 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqv65\" (UniqueName: \"kubernetes.io/projected/b20fc329-8560-46c2-ad9e-235a140302e2-kube-api-access-rqv65\") pod \"nova-api-8e5c-account-create-update-4p6rp\" (UID: \"b20fc329-8560-46c2-ad9e-235a140302e2\") " pod="openstack/nova-api-8e5c-account-create-update-4p6rp" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.134137 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-cpd2l"] Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.135247 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cpd2l" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.168559 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cpd2l"] Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.172812 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vlvmp" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.180572 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1879889e-4ae9-4bbf-b25d-efc0020c3000-operator-scripts\") pod \"nova-cell0-db-create-cpd2l\" (UID: \"1879889e-4ae9-4bbf-b25d-efc0020c3000\") " pod="openstack/nova-cell0-db-create-cpd2l" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.180655 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58bdq\" (UniqueName: \"kubernetes.io/projected/1879889e-4ae9-4bbf-b25d-efc0020c3000-kube-api-access-58bdq\") pod \"nova-cell0-db-create-cpd2l\" (UID: \"1879889e-4ae9-4bbf-b25d-efc0020c3000\") " pod="openstack/nova-cell0-db-create-cpd2l" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.181189 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b20fc329-8560-46c2-ad9e-235a140302e2-operator-scripts\") pod \"nova-api-8e5c-account-create-update-4p6rp\" (UID: \"b20fc329-8560-46c2-ad9e-235a140302e2\") " pod="openstack/nova-api-8e5c-account-create-update-4p6rp" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.181271 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqv65\" (UniqueName: \"kubernetes.io/projected/b20fc329-8560-46c2-ad9e-235a140302e2-kube-api-access-rqv65\") pod \"nova-api-8e5c-account-create-update-4p6rp\" (UID: \"b20fc329-8560-46c2-ad9e-235a140302e2\") " pod="openstack/nova-api-8e5c-account-create-update-4p6rp" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.183693 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b20fc329-8560-46c2-ad9e-235a140302e2-operator-scripts\") pod \"nova-api-8e5c-account-create-update-4p6rp\" (UID: \"b20fc329-8560-46c2-ad9e-235a140302e2\") " pod="openstack/nova-api-8e5c-account-create-update-4p6rp" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.208153 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqv65\" (UniqueName: \"kubernetes.io/projected/b20fc329-8560-46c2-ad9e-235a140302e2-kube-api-access-rqv65\") pod \"nova-api-8e5c-account-create-update-4p6rp\" (UID: \"b20fc329-8560-46c2-ad9e-235a140302e2\") " pod="openstack/nova-api-8e5c-account-create-update-4p6rp" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.283619 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1879889e-4ae9-4bbf-b25d-efc0020c3000-operator-scripts\") pod \"nova-cell0-db-create-cpd2l\" (UID: \"1879889e-4ae9-4bbf-b25d-efc0020c3000\") " pod="openstack/nova-cell0-db-create-cpd2l" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.284422 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58bdq\" (UniqueName: \"kubernetes.io/projected/1879889e-4ae9-4bbf-b25d-efc0020c3000-kube-api-access-58bdq\") pod \"nova-cell0-db-create-cpd2l\" (UID: \"1879889e-4ae9-4bbf-b25d-efc0020c3000\") " pod="openstack/nova-cell0-db-create-cpd2l" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.284369 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1879889e-4ae9-4bbf-b25d-efc0020c3000-operator-scripts\") pod \"nova-cell0-db-create-cpd2l\" (UID: \"1879889e-4ae9-4bbf-b25d-efc0020c3000\") " pod="openstack/nova-cell0-db-create-cpd2l" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.310776 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-pkjqx"] Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.312106 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pkjqx" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.328731 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-f854-account-create-update-9gfsv"] Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.329445 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58bdq\" (UniqueName: \"kubernetes.io/projected/1879889e-4ae9-4bbf-b25d-efc0020c3000-kube-api-access-58bdq\") pod \"nova-cell0-db-create-cpd2l\" (UID: \"1879889e-4ae9-4bbf-b25d-efc0020c3000\") " pod="openstack/nova-cell0-db-create-cpd2l" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.330190 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f854-account-create-update-9gfsv" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.339819 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.362920 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pkjqx"] Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.365100 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8e5c-account-create-update-4p6rp" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.379819 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f854-account-create-update-9gfsv"] Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.389432 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07abbdc8-94ea-4e7f-9d9f-92eef06d26c5-operator-scripts\") pod \"nova-cell1-db-create-pkjqx\" (UID: \"07abbdc8-94ea-4e7f-9d9f-92eef06d26c5\") " pod="openstack/nova-cell1-db-create-pkjqx" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.389480 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2462\" (UniqueName: \"kubernetes.io/projected/49840a38-6542-47bf-ab78-ba21cd4fdd94-kube-api-access-g2462\") pod \"nova-cell0-f854-account-create-update-9gfsv\" (UID: \"49840a38-6542-47bf-ab78-ba21cd4fdd94\") " pod="openstack/nova-cell0-f854-account-create-update-9gfsv" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.389510 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49840a38-6542-47bf-ab78-ba21cd4fdd94-operator-scripts\") pod \"nova-cell0-f854-account-create-update-9gfsv\" (UID: \"49840a38-6542-47bf-ab78-ba21cd4fdd94\") " pod="openstack/nova-cell0-f854-account-create-update-9gfsv" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.389584 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6z77\" (UniqueName: 
\"kubernetes.io/projected/07abbdc8-94ea-4e7f-9d9f-92eef06d26c5-kube-api-access-q6z77\") pod \"nova-cell1-db-create-pkjqx\" (UID: \"07abbdc8-94ea-4e7f-9d9f-92eef06d26c5\") " pod="openstack/nova-cell1-db-create-pkjqx" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.464860 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cpd2l" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.470717 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-750e-account-create-update-lq645"] Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.538706 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6z77\" (UniqueName: \"kubernetes.io/projected/07abbdc8-94ea-4e7f-9d9f-92eef06d26c5-kube-api-access-q6z77\") pod \"nova-cell1-db-create-pkjqx\" (UID: \"07abbdc8-94ea-4e7f-9d9f-92eef06d26c5\") " pod="openstack/nova-cell1-db-create-pkjqx" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.539214 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07abbdc8-94ea-4e7f-9d9f-92eef06d26c5-operator-scripts\") pod \"nova-cell1-db-create-pkjqx\" (UID: \"07abbdc8-94ea-4e7f-9d9f-92eef06d26c5\") " pod="openstack/nova-cell1-db-create-pkjqx" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.539284 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2462\" (UniqueName: \"kubernetes.io/projected/49840a38-6542-47bf-ab78-ba21cd4fdd94-kube-api-access-g2462\") pod \"nova-cell0-f854-account-create-update-9gfsv\" (UID: \"49840a38-6542-47bf-ab78-ba21cd4fdd94\") " pod="openstack/nova-cell0-f854-account-create-update-9gfsv" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.539349 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49840a38-6542-47bf-ab78-ba21cd4fdd94-operator-scripts\") pod \"nova-cell0-f854-account-create-update-9gfsv\" (UID: \"49840a38-6542-47bf-ab78-ba21cd4fdd94\") " pod="openstack/nova-cell0-f854-account-create-update-9gfsv" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.541940 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49840a38-6542-47bf-ab78-ba21cd4fdd94-operator-scripts\") pod \"nova-cell0-f854-account-create-update-9gfsv\" (UID: \"49840a38-6542-47bf-ab78-ba21cd4fdd94\") " pod="openstack/nova-cell0-f854-account-create-update-9gfsv" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.543168 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-750e-account-create-update-lq645" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.549292 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07abbdc8-94ea-4e7f-9d9f-92eef06d26c5-operator-scripts\") pod \"nova-cell1-db-create-pkjqx\" (UID: \"07abbdc8-94ea-4e7f-9d9f-92eef06d26c5\") " pod="openstack/nova-cell1-db-create-pkjqx" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.557625 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-750e-account-create-update-lq645"] Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.573887 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.619079 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6z77\" (UniqueName: \"kubernetes.io/projected/07abbdc8-94ea-4e7f-9d9f-92eef06d26c5-kube-api-access-q6z77\") pod \"nova-cell1-db-create-pkjqx\" (UID: \"07abbdc8-94ea-4e7f-9d9f-92eef06d26c5\") " pod="openstack/nova-cell1-db-create-pkjqx" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.641947 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/e49fe865-f025-4842-8e32-a4f9213cdc2a-kube-api-access-c4wzb\") pod \"nova-cell1-750e-account-create-update-lq645\" (UID: \"e49fe865-f025-4842-8e32-a4f9213cdc2a\") " pod="openstack/nova-cell1-750e-account-create-update-lq645" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.642064 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e49fe865-f025-4842-8e32-a4f9213cdc2a-operator-scripts\") pod \"nova-cell1-750e-account-create-update-lq645\" (UID: \"e49fe865-f025-4842-8e32-a4f9213cdc2a\") " pod="openstack/nova-cell1-750e-account-create-update-lq645" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.657572 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2462\" (UniqueName: \"kubernetes.io/projected/49840a38-6542-47bf-ab78-ba21cd4fdd94-kube-api-access-g2462\") pod \"nova-cell0-f854-account-create-update-9gfsv\" (UID: \"49840a38-6542-47bf-ab78-ba21cd4fdd94\") " pod="openstack/nova-cell0-f854-account-create-update-9gfsv" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.732293 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da178eaf-bf04-4638-a071-808d119fd4ec","Type":"ContainerStarted","Data":"7b178bc3962c4676c3cff7c96090c1070afdfb6249386cab33bb147b4c00b9de"} Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.734138 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-pkjqx" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.745754 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e49fe865-f025-4842-8e32-a4f9213cdc2a-operator-scripts\") pod \"nova-cell1-750e-account-create-update-lq645\" (UID: \"e49fe865-f025-4842-8e32-a4f9213cdc2a\") " pod="openstack/nova-cell1-750e-account-create-update-lq645" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.745839 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/e49fe865-f025-4842-8e32-a4f9213cdc2a-kube-api-access-c4wzb\") pod \"nova-cell1-750e-account-create-update-lq645\" (UID: \"e49fe865-f025-4842-8e32-a4f9213cdc2a\") " pod="openstack/nova-cell1-750e-account-create-update-lq645" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.746626 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e49fe865-f025-4842-8e32-a4f9213cdc2a-operator-scripts\") pod \"nova-cell1-750e-account-create-update-lq645\" (UID: \"e49fe865-f025-4842-8e32-a4f9213cdc2a\") " pod="openstack/nova-cell1-750e-account-create-update-lq645" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.773263 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f854-account-create-update-9gfsv" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.782246 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.782229071 podStartE2EDuration="3.782229071s" podCreationTimestamp="2026-01-20 18:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:26:02.773897741 +0000 UTC m=+1219.104687403" watchObservedRunningTime="2026-01-20 18:26:02.782229071 +0000 UTC m=+1219.113018723" Jan 20 18:26:02 crc kubenswrapper[4661]: I0120 18:26:02.809512 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/e49fe865-f025-4842-8e32-a4f9213cdc2a-kube-api-access-c4wzb\") pod \"nova-cell1-750e-account-create-update-lq645\" (UID: \"e49fe865-f025-4842-8e32-a4f9213cdc2a\") " pod="openstack/nova-cell1-750e-account-create-update-lq645" Jan 20 18:26:03 crc kubenswrapper[4661]: I0120 18:26:03.044514 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-750e-account-create-update-lq645" Jan 20 18:26:03 crc kubenswrapper[4661]: I0120 18:26:03.161377 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vlvmp"] Jan 20 18:26:03 crc kubenswrapper[4661]: W0120 18:26:03.177829 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda47f9dfb_b359_4486_bbb8_7895eddd6176.slice/crio-c8ad2a25f87dac8dc4edd32b21a072a5b62fe48e4a2422e4adee33a1325aba68 WatchSource:0}: Error finding container c8ad2a25f87dac8dc4edd32b21a072a5b62fe48e4a2422e4adee33a1325aba68: Status 404 returned error can't find the container with id c8ad2a25f87dac8dc4edd32b21a072a5b62fe48e4a2422e4adee33a1325aba68 Jan 20 18:26:03 crc kubenswrapper[4661]: I0120 18:26:03.180226 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8e5c-account-create-update-4p6rp"] Jan 20 18:26:03 crc kubenswrapper[4661]: W0120 18:26:03.211700 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb20fc329_8560_46c2_ad9e_235a140302e2.slice/crio-8b6962f4a65bb45f76609c4d090df5bf69cbba581fce5eed47521b965a0c776e WatchSource:0}: Error finding container 8b6962f4a65bb45f76609c4d090df5bf69cbba581fce5eed47521b965a0c776e: Status 404 returned error can't find the container with id 8b6962f4a65bb45f76609c4d090df5bf69cbba581fce5eed47521b965a0c776e Jan 20 18:26:03 crc kubenswrapper[4661]: I0120 18:26:03.472584 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cpd2l"] Jan 20 18:26:03 crc kubenswrapper[4661]: I0120 18:26:03.539648 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pkjqx"] Jan 20 18:26:03 crc kubenswrapper[4661]: I0120 18:26:03.692488 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f854-account-create-update-9gfsv"] Jan 20 18:26:03 crc kubenswrapper[4661]: W0120 18:26:03.704174 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49840a38_6542_47bf_ab78_ba21cd4fdd94.slice/crio-a421c2a6c1e342b1e6779670965b3b8155caf312d8a3b4ab769f96242c7489f4 WatchSource:0}: Error finding container a421c2a6c1e342b1e6779670965b3b8155caf312d8a3b4ab769f96242c7489f4: Status 404 returned error can't find the container with id a421c2a6c1e342b1e6779670965b3b8155caf312d8a3b4ab769f96242c7489f4 Jan 20 18:26:03 crc kubenswrapper[4661]: I0120 18:26:03.811014 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f854-account-create-update-9gfsv" event={"ID":"49840a38-6542-47bf-ab78-ba21cd4fdd94","Type":"ContainerStarted","Data":"a421c2a6c1e342b1e6779670965b3b8155caf312d8a3b4ab769f96242c7489f4"} Jan 20 18:26:03 crc kubenswrapper[4661]: I0120 18:26:03.828766 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-750e-account-create-update-lq645"] Jan 20 18:26:03 crc kubenswrapper[4661]: I0120 18:26:03.840094 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pkjqx" event={"ID":"07abbdc8-94ea-4e7f-9d9f-92eef06d26c5","Type":"ContainerStarted","Data":"4494311e9861f7e3d8eebc404e2013b25f2e2f6d4bb8d134b66c8999c355442c"} Jan 20 18:26:03 crc kubenswrapper[4661]: I0120 18:26:03.857949 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cpd2l" 
event={"ID":"1879889e-4ae9-4bbf-b25d-efc0020c3000","Type":"ContainerStarted","Data":"d5d734fb12e34d64c6f3c6cbca036ae9d0a62f4dbe9bf80228019d325f53e745"} Jan 20 18:26:03 crc kubenswrapper[4661]: I0120 18:26:03.862402 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vlvmp" event={"ID":"a47f9dfb-b359-4486-bbb8-7895eddd6176","Type":"ContainerStarted","Data":"c8ad2a25f87dac8dc4edd32b21a072a5b62fe48e4a2422e4adee33a1325aba68"} Jan 20 18:26:03 crc kubenswrapper[4661]: I0120 18:26:03.869719 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8e5c-account-create-update-4p6rp" event={"ID":"b20fc329-8560-46c2-ad9e-235a140302e2","Type":"ContainerStarted","Data":"8b6962f4a65bb45f76609c4d090df5bf69cbba581fce5eed47521b965a0c776e"} Jan 20 18:26:04 crc kubenswrapper[4661]: I0120 18:26:04.879491 4661 generic.go:334] "Generic (PLEG): container finished" podID="07abbdc8-94ea-4e7f-9d9f-92eef06d26c5" containerID="e856c8ed4f3ff69814e70be1993cc2847ac0c782d52155a3460e0c435a669ba1" exitCode=0 Jan 20 18:26:04 crc kubenswrapper[4661]: I0120 18:26:04.879623 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pkjqx" event={"ID":"07abbdc8-94ea-4e7f-9d9f-92eef06d26c5","Type":"ContainerDied","Data":"e856c8ed4f3ff69814e70be1993cc2847ac0c782d52155a3460e0c435a669ba1"} Jan 20 18:26:04 crc kubenswrapper[4661]: I0120 18:26:04.880872 4661 generic.go:334] "Generic (PLEG): container finished" podID="1879889e-4ae9-4bbf-b25d-efc0020c3000" containerID="149dfd1b9ed80c83523be9a2d33cfc40fbe80e6fbcaf62730c06b27739790b82" exitCode=0 Jan 20 18:26:04 crc kubenswrapper[4661]: I0120 18:26:04.880961 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cpd2l" event={"ID":"1879889e-4ae9-4bbf-b25d-efc0020c3000","Type":"ContainerDied","Data":"149dfd1b9ed80c83523be9a2d33cfc40fbe80e6fbcaf62730c06b27739790b82"} Jan 20 18:26:04 crc kubenswrapper[4661]: I0120 18:26:04.883026 4661 generic.go:334] "Generic (PLEG): container finished" podID="a47f9dfb-b359-4486-bbb8-7895eddd6176" containerID="cbc718a107e4e6824972da0ab909768ba760bb49841908a25546d3620f552c18" exitCode=0 Jan 20 18:26:04 crc kubenswrapper[4661]: I0120 18:26:04.883089 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vlvmp" event={"ID":"a47f9dfb-b359-4486-bbb8-7895eddd6176","Type":"ContainerDied","Data":"cbc718a107e4e6824972da0ab909768ba760bb49841908a25546d3620f552c18"} Jan 20 18:26:04 crc kubenswrapper[4661]: I0120 18:26:04.884208 4661 generic.go:334] "Generic (PLEG): container finished" podID="b20fc329-8560-46c2-ad9e-235a140302e2" containerID="b40a508cc12511d22aae4c26fcfede2554ce1b126643d36df46ac6337e227d94" exitCode=0 Jan 20 18:26:04 crc kubenswrapper[4661]: I0120 18:26:04.884275 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8e5c-account-create-update-4p6rp" event={"ID":"b20fc329-8560-46c2-ad9e-235a140302e2","Type":"ContainerDied","Data":"b40a508cc12511d22aae4c26fcfede2554ce1b126643d36df46ac6337e227d94"} Jan 20 18:26:04 crc kubenswrapper[4661]: I0120 18:26:04.885611 4661 generic.go:334] "Generic (PLEG): container finished" podID="e49fe865-f025-4842-8e32-a4f9213cdc2a" containerID="3117defb758cb93e92036fcd7872361417ce7c3f258b7e847604642f3aaed493" exitCode=0 Jan 20 18:26:04 crc kubenswrapper[4661]: I0120 18:26:04.885680 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-750e-account-create-update-lq645" 
event={"ID":"e49fe865-f025-4842-8e32-a4f9213cdc2a","Type":"ContainerDied","Data":"3117defb758cb93e92036fcd7872361417ce7c3f258b7e847604642f3aaed493"} Jan 20 18:26:04 crc kubenswrapper[4661]: I0120 18:26:04.885698 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-750e-account-create-update-lq645" event={"ID":"e49fe865-f025-4842-8e32-a4f9213cdc2a","Type":"ContainerStarted","Data":"263633724dc6323af8a5812f9b5e1b62096ee17a0c917cf4c310b6fd8552ac9d"} Jan 20 18:26:04 crc kubenswrapper[4661]: I0120 18:26:04.886745 4661 generic.go:334] "Generic (PLEG): container finished" podID="49840a38-6542-47bf-ab78-ba21cd4fdd94" containerID="4e632186df404443d8e36f9acac18c5e5a84d07feaa93ee71904ebd4c057a8bf" exitCode=0 Jan 20 18:26:04 crc kubenswrapper[4661]: I0120 18:26:04.886778 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f854-account-create-update-9gfsv" event={"ID":"49840a38-6542-47bf-ab78-ba21cd4fdd94","Type":"ContainerDied","Data":"4e632186df404443d8e36f9acac18c5e5a84d07feaa93ee71904ebd4c057a8bf"} Jan 20 18:26:05 crc kubenswrapper[4661]: I0120 18:26:05.055105 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 20 18:26:05 crc kubenswrapper[4661]: I0120 18:26:05.868975 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="b4fa215a-165d-44b7-9bfd-19a2a9a5205c" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.154:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 18:26:08 crc kubenswrapper[4661]: I0120 18:26:08.636262 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 20 18:26:10 crc kubenswrapper[4661]: I0120 18:26:10.280056 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 20 18:26:11 crc kubenswrapper[4661]: I0120 18:26:11.562751 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 18:26:11 crc kubenswrapper[4661]: I0120 18:26:11.953490 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-750e-account-create-update-lq645" event={"ID":"e49fe865-f025-4842-8e32-a4f9213cdc2a","Type":"ContainerDied","Data":"263633724dc6323af8a5812f9b5e1b62096ee17a0c917cf4c310b6fd8552ac9d"} Jan 20 18:26:11 crc kubenswrapper[4661]: I0120 18:26:11.953804 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="263633724dc6323af8a5812f9b5e1b62096ee17a0c917cf4c310b6fd8552ac9d" Jan 20 18:26:11 crc kubenswrapper[4661]: I0120 18:26:11.955818 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pkjqx" event={"ID":"07abbdc8-94ea-4e7f-9d9f-92eef06d26c5","Type":"ContainerDied","Data":"4494311e9861f7e3d8eebc404e2013b25f2e2f6d4bb8d134b66c8999c355442c"} Jan 20 18:26:11 crc kubenswrapper[4661]: I0120 18:26:11.955886 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4494311e9861f7e3d8eebc404e2013b25f2e2f6d4bb8d134b66c8999c355442c" Jan 20 18:26:11 crc kubenswrapper[4661]: I0120 18:26:11.957711 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f854-account-create-update-9gfsv" event={"ID":"49840a38-6542-47bf-ab78-ba21cd4fdd94","Type":"ContainerDied","Data":"a421c2a6c1e342b1e6779670965b3b8155caf312d8a3b4ab769f96242c7489f4"} Jan 20 18:26:11 crc 
kubenswrapper[4661]: I0120 18:26:11.957739 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a421c2a6c1e342b1e6779670965b3b8155caf312d8a3b4ab769f96242c7489f4" Jan 20 18:26:11 crc kubenswrapper[4661]: I0120 18:26:11.959386 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cpd2l" event={"ID":"1879889e-4ae9-4bbf-b25d-efc0020c3000","Type":"ContainerDied","Data":"d5d734fb12e34d64c6f3c6cbca036ae9d0a62f4dbe9bf80228019d325f53e745"} Jan 20 18:26:11 crc kubenswrapper[4661]: I0120 18:26:11.959404 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d734fb12e34d64c6f3c6cbca036ae9d0a62f4dbe9bf80228019d325f53e745" Jan 20 18:26:11 crc kubenswrapper[4661]: I0120 18:26:11.961295 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vlvmp" event={"ID":"a47f9dfb-b359-4486-bbb8-7895eddd6176","Type":"ContainerDied","Data":"c8ad2a25f87dac8dc4edd32b21a072a5b62fe48e4a2422e4adee33a1325aba68"} Jan 20 18:26:11 crc kubenswrapper[4661]: I0120 18:26:11.961321 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8ad2a25f87dac8dc4edd32b21a072a5b62fe48e4a2422e4adee33a1325aba68" Jan 20 18:26:11 crc kubenswrapper[4661]: I0120 18:26:11.962721 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8e5c-account-create-update-4p6rp" event={"ID":"b20fc329-8560-46c2-ad9e-235a140302e2","Type":"ContainerDied","Data":"8b6962f4a65bb45f76609c4d090df5bf69cbba581fce5eed47521b965a0c776e"} Jan 20 18:26:11 crc kubenswrapper[4661]: I0120 18:26:11.962743 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b6962f4a65bb45f76609c4d090df5bf69cbba581fce5eed47521b965a0c776e" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.089694 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pkjqx" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.094732 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8e5c-account-create-update-4p6rp" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.100037 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vlvmp" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.106769 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-750e-account-create-update-lq645" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.113302 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f854-account-create-update-9gfsv" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.151775 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cpd2l" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.169337 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl8fq\" (UniqueName: \"kubernetes.io/projected/a47f9dfb-b359-4486-bbb8-7895eddd6176-kube-api-access-jl8fq\") pod \"a47f9dfb-b359-4486-bbb8-7895eddd6176\" (UID: \"a47f9dfb-b359-4486-bbb8-7895eddd6176\") " Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.169490 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07abbdc8-94ea-4e7f-9d9f-92eef06d26c5-operator-scripts\") pod \"07abbdc8-94ea-4e7f-9d9f-92eef06d26c5\" (UID: \"07abbdc8-94ea-4e7f-9d9f-92eef06d26c5\") " Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.169537 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b20fc329-8560-46c2-ad9e-235a140302e2-operator-scripts\") pod \"b20fc329-8560-46c2-ad9e-235a140302e2\" (UID: \"b20fc329-8560-46c2-ad9e-235a140302e2\") " Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.169582 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6z77\" (UniqueName: \"kubernetes.io/projected/07abbdc8-94ea-4e7f-9d9f-92eef06d26c5-kube-api-access-q6z77\") pod \"07abbdc8-94ea-4e7f-9d9f-92eef06d26c5\" (UID: \"07abbdc8-94ea-4e7f-9d9f-92eef06d26c5\") " Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.169603 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqv65\" (UniqueName: \"kubernetes.io/projected/b20fc329-8560-46c2-ad9e-235a140302e2-kube-api-access-rqv65\") pod \"b20fc329-8560-46c2-ad9e-235a140302e2\" (UID: \"b20fc329-8560-46c2-ad9e-235a140302e2\") " Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.169645 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a47f9dfb-b359-4486-bbb8-7895eddd6176-operator-scripts\") pod \"a47f9dfb-b359-4486-bbb8-7895eddd6176\" (UID: \"a47f9dfb-b359-4486-bbb8-7895eddd6176\") " Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.170458 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a47f9dfb-b359-4486-bbb8-7895eddd6176-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a47f9dfb-b359-4486-bbb8-7895eddd6176" (UID: "a47f9dfb-b359-4486-bbb8-7895eddd6176"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.170545 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b20fc329-8560-46c2-ad9e-235a140302e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b20fc329-8560-46c2-ad9e-235a140302e2" (UID: "b20fc329-8560-46c2-ad9e-235a140302e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.171121 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07abbdc8-94ea-4e7f-9d9f-92eef06d26c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07abbdc8-94ea-4e7f-9d9f-92eef06d26c5" (UID: "07abbdc8-94ea-4e7f-9d9f-92eef06d26c5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.198075 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a47f9dfb-b359-4486-bbb8-7895eddd6176-kube-api-access-jl8fq" (OuterVolumeSpecName: "kube-api-access-jl8fq") pod "a47f9dfb-b359-4486-bbb8-7895eddd6176" (UID: "a47f9dfb-b359-4486-bbb8-7895eddd6176"). InnerVolumeSpecName "kube-api-access-jl8fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.200494 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b20fc329-8560-46c2-ad9e-235a140302e2-kube-api-access-rqv65" (OuterVolumeSpecName: "kube-api-access-rqv65") pod "b20fc329-8560-46c2-ad9e-235a140302e2" (UID: "b20fc329-8560-46c2-ad9e-235a140302e2"). InnerVolumeSpecName "kube-api-access-rqv65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.205307 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07abbdc8-94ea-4e7f-9d9f-92eef06d26c5-kube-api-access-q6z77" (OuterVolumeSpecName: "kube-api-access-q6z77") pod "07abbdc8-94ea-4e7f-9d9f-92eef06d26c5" (UID: "07abbdc8-94ea-4e7f-9d9f-92eef06d26c5"). InnerVolumeSpecName "kube-api-access-q6z77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.274549 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2462\" (UniqueName: \"kubernetes.io/projected/49840a38-6542-47bf-ab78-ba21cd4fdd94-kube-api-access-g2462\") pod \"49840a38-6542-47bf-ab78-ba21cd4fdd94\" (UID: \"49840a38-6542-47bf-ab78-ba21cd4fdd94\") " Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.274618 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58bdq\" (UniqueName: \"kubernetes.io/projected/1879889e-4ae9-4bbf-b25d-efc0020c3000-kube-api-access-58bdq\") pod \"1879889e-4ae9-4bbf-b25d-efc0020c3000\" (UID: \"1879889e-4ae9-4bbf-b25d-efc0020c3000\") " Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.274706 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49840a38-6542-47bf-ab78-ba21cd4fdd94-operator-scripts\") pod \"49840a38-6542-47bf-ab78-ba21cd4fdd94\" (UID: \"49840a38-6542-47bf-ab78-ba21cd4fdd94\") " Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.281822 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/e49fe865-f025-4842-8e32-a4f9213cdc2a-kube-api-access-c4wzb\") pod \"e49fe865-f025-4842-8e32-a4f9213cdc2a\" (UID: \"e49fe865-f025-4842-8e32-a4f9213cdc2a\") " Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.281961 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e49fe865-f025-4842-8e32-a4f9213cdc2a-operator-scripts\") pod \"e49fe865-f025-4842-8e32-a4f9213cdc2a\" (UID: \"e49fe865-f025-4842-8e32-a4f9213cdc2a\") " Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.281987 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1879889e-4ae9-4bbf-b25d-efc0020c3000-operator-scripts\") pod 
\"1879889e-4ae9-4bbf-b25d-efc0020c3000\" (UID: \"1879889e-4ae9-4bbf-b25d-efc0020c3000\") " Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.283109 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07abbdc8-94ea-4e7f-9d9f-92eef06d26c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.283127 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b20fc329-8560-46c2-ad9e-235a140302e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.283139 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6z77\" (UniqueName: \"kubernetes.io/projected/07abbdc8-94ea-4e7f-9d9f-92eef06d26c5-kube-api-access-q6z77\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.283150 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqv65\" (UniqueName: \"kubernetes.io/projected/b20fc329-8560-46c2-ad9e-235a140302e2-kube-api-access-rqv65\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.283163 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a47f9dfb-b359-4486-bbb8-7895eddd6176-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.283172 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl8fq\" (UniqueName: \"kubernetes.io/projected/a47f9dfb-b359-4486-bbb8-7895eddd6176-kube-api-access-jl8fq\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.290772 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1879889e-4ae9-4bbf-b25d-efc0020c3000-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1879889e-4ae9-4bbf-b25d-efc0020c3000" (UID: "1879889e-4ae9-4bbf-b25d-efc0020c3000"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.293849 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49fe865-f025-4842-8e32-a4f9213cdc2a-kube-api-access-c4wzb" (OuterVolumeSpecName: "kube-api-access-c4wzb") pod "e49fe865-f025-4842-8e32-a4f9213cdc2a" (UID: "e49fe865-f025-4842-8e32-a4f9213cdc2a"). InnerVolumeSpecName "kube-api-access-c4wzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.294450 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49fe865-f025-4842-8e32-a4f9213cdc2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e49fe865-f025-4842-8e32-a4f9213cdc2a" (UID: "e49fe865-f025-4842-8e32-a4f9213cdc2a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.294618 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49840a38-6542-47bf-ab78-ba21cd4fdd94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49840a38-6542-47bf-ab78-ba21cd4fdd94" (UID: "49840a38-6542-47bf-ab78-ba21cd4fdd94"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.297832 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1879889e-4ae9-4bbf-b25d-efc0020c3000-kube-api-access-58bdq" (OuterVolumeSpecName: "kube-api-access-58bdq") pod "1879889e-4ae9-4bbf-b25d-efc0020c3000" (UID: "1879889e-4ae9-4bbf-b25d-efc0020c3000"). InnerVolumeSpecName "kube-api-access-58bdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.297948 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49840a38-6542-47bf-ab78-ba21cd4fdd94-kube-api-access-g2462" (OuterVolumeSpecName: "kube-api-access-g2462") pod "49840a38-6542-47bf-ab78-ba21cd4fdd94" (UID: "49840a38-6542-47bf-ab78-ba21cd4fdd94"). InnerVolumeSpecName "kube-api-access-g2462". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.385216 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e49fe865-f025-4842-8e32-a4f9213cdc2a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.385249 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1879889e-4ae9-4bbf-b25d-efc0020c3000-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.385258 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2462\" (UniqueName: \"kubernetes.io/projected/49840a38-6542-47bf-ab78-ba21cd4fdd94-kube-api-access-g2462\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.385270 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58bdq\" (UniqueName: \"kubernetes.io/projected/1879889e-4ae9-4bbf-b25d-efc0020c3000-kube-api-access-58bdq\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.385279 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49840a38-6542-47bf-ab78-ba21cd4fdd94-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.385288 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4wzb\" (UniqueName: \"kubernetes.io/projected/e49fe865-f025-4842-8e32-a4f9213cdc2a-kube-api-access-c4wzb\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.971656 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c6b78b7c-8709-4a28-bc8f-1cf8960203cc","Type":"ContainerStarted","Data":"74e87ab801ac982e08c386f2e766c1b8753240b981dae46dcc5c1a6625219518"} Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.971757 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cpd2l" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.971879 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vlvmp" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.971951 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-750e-account-create-update-lq645" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.971983 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f854-account-create-update-9gfsv" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.972132 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pkjqx" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.973059 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8e5c-account-create-update-4p6rp" Jan 20 18:26:12 crc kubenswrapper[4661]: I0120 18:26:12.999055 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.840919678 podStartE2EDuration="18.999031962s" podCreationTimestamp="2026-01-20 18:25:54 +0000 UTC" firstStartedPulling="2026-01-20 18:25:55.813201894 +0000 UTC m=+1212.143991556" lastFinishedPulling="2026-01-20 18:26:11.971314178 +0000 UTC m=+1228.302103840" observedRunningTime="2026-01-20 18:26:12.998318943 +0000 UTC m=+1229.329108665" watchObservedRunningTime="2026-01-20 18:26:12.999031962 +0000 UTC m=+1229.329821634" Jan 20 18:26:16 crc kubenswrapper[4661]: I0120 18:26:16.906723 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 18:26:16 crc kubenswrapper[4661]: I0120 18:26:16.907401 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b2c8897f-8188-4a97-8839-e205b94514c7" containerName="kube-state-metrics" containerID="cri-o://6610edecbf5f8693af6fa0504bc285ed19cc0eb56dd390c3e1503ba694919956" gracePeriod=30 Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.525971 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.679584 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nvgf\" (UniqueName: \"kubernetes.io/projected/b2c8897f-8188-4a97-8839-e205b94514c7-kube-api-access-7nvgf\") pod \"b2c8897f-8188-4a97-8839-e205b94514c7\" (UID: \"b2c8897f-8188-4a97-8839-e205b94514c7\") " Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.703899 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c8897f-8188-4a97-8839-e205b94514c7-kube-api-access-7nvgf" (OuterVolumeSpecName: "kube-api-access-7nvgf") pod "b2c8897f-8188-4a97-8839-e205b94514c7" (UID: "b2c8897f-8188-4a97-8839-e205b94514c7"). InnerVolumeSpecName "kube-api-access-7nvgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766138 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbltb"] Jan 20 18:26:17 crc kubenswrapper[4661]: E0120 18:26:17.766451 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07abbdc8-94ea-4e7f-9d9f-92eef06d26c5" containerName="mariadb-database-create" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766467 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="07abbdc8-94ea-4e7f-9d9f-92eef06d26c5" containerName="mariadb-database-create" Jan 20 18:26:17 crc kubenswrapper[4661]: E0120 18:26:17.766481 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49fe865-f025-4842-8e32-a4f9213cdc2a" containerName="mariadb-account-create-update" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766506 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49fe865-f025-4842-8e32-a4f9213cdc2a" containerName="mariadb-account-create-update" Jan 20 18:26:17 crc kubenswrapper[4661]: E0120 18:26:17.766524 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49840a38-6542-47bf-ab78-ba21cd4fdd94" containerName="mariadb-account-create-update" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766531 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="49840a38-6542-47bf-ab78-ba21cd4fdd94" containerName="mariadb-account-create-update" Jan 20 18:26:17 crc kubenswrapper[4661]: E0120 18:26:17.766541 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1879889e-4ae9-4bbf-b25d-efc0020c3000" containerName="mariadb-database-create" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766546 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1879889e-4ae9-4bbf-b25d-efc0020c3000" containerName="mariadb-database-create" Jan 20 18:26:17 crc kubenswrapper[4661]: E0120 18:26:17.766557 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a47f9dfb-b359-4486-bbb8-7895eddd6176" containerName="mariadb-database-create" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766563 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="a47f9dfb-b359-4486-bbb8-7895eddd6176" containerName="mariadb-database-create" Jan 20 18:26:17 crc kubenswrapper[4661]: E0120 18:26:17.766576 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c8897f-8188-4a97-8839-e205b94514c7" containerName="kube-state-metrics" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766582 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c8897f-8188-4a97-8839-e205b94514c7" containerName="kube-state-metrics" Jan 20 18:26:17 crc kubenswrapper[4661]: E0120 18:26:17.766590 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b20fc329-8560-46c2-ad9e-235a140302e2" containerName="mariadb-account-create-update" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766598 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b20fc329-8560-46c2-ad9e-235a140302e2" containerName="mariadb-account-create-update" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766750 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="49840a38-6542-47bf-ab78-ba21cd4fdd94" containerName="mariadb-account-create-update" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766758 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="a47f9dfb-b359-4486-bbb8-7895eddd6176" 
containerName="mariadb-database-create" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766764 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="1879889e-4ae9-4bbf-b25d-efc0020c3000" containerName="mariadb-database-create" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766779 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e49fe865-f025-4842-8e32-a4f9213cdc2a" containerName="mariadb-account-create-update" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766786 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="07abbdc8-94ea-4e7f-9d9f-92eef06d26c5" containerName="mariadb-database-create" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766794 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b20fc329-8560-46c2-ad9e-235a140302e2" containerName="mariadb-account-create-update" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.766801 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c8897f-8188-4a97-8839-e205b94514c7" containerName="kube-state-metrics" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.767263 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.769984 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.770539 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.770661 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fpdcz" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.773246 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbltb"] Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.781985 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nvgf\" (UniqueName: \"kubernetes.io/projected/b2c8897f-8188-4a97-8839-e205b94514c7-kube-api-access-7nvgf\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.883837 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zbltb\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.883883 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-scripts\") pod \"nova-cell0-conductor-db-sync-zbltb\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.884122 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-config-data\") pod \"nova-cell0-conductor-db-sync-zbltb\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 
18:26:17.884239 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgkmn\" (UniqueName: \"kubernetes.io/projected/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-kube-api-access-mgkmn\") pod \"nova-cell0-conductor-db-sync-zbltb\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.985465 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zbltb\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.985514 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-scripts\") pod \"nova-cell0-conductor-db-sync-zbltb\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.985630 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-config-data\") pod \"nova-cell0-conductor-db-sync-zbltb\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.985681 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgkmn\" (UniqueName: \"kubernetes.io/projected/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-kube-api-access-mgkmn\") pod \"nova-cell0-conductor-db-sync-zbltb\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.989993 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-scripts\") pod \"nova-cell0-conductor-db-sync-zbltb\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.990129 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-config-data\") pod \"nova-cell0-conductor-db-sync-zbltb\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:17 crc kubenswrapper[4661]: I0120 18:26:17.992304 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zbltb\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.007646 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgkmn\" (UniqueName: \"kubernetes.io/projected/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-kube-api-access-mgkmn\") pod \"nova-cell0-conductor-db-sync-zbltb\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 
20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.008929 4661 generic.go:334] "Generic (PLEG): container finished" podID="b2c8897f-8188-4a97-8839-e205b94514c7" containerID="6610edecbf5f8693af6fa0504bc285ed19cc0eb56dd390c3e1503ba694919956" exitCode=2 Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.008995 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b2c8897f-8188-4a97-8839-e205b94514c7","Type":"ContainerDied","Data":"6610edecbf5f8693af6fa0504bc285ed19cc0eb56dd390c3e1503ba694919956"} Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.009024 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b2c8897f-8188-4a97-8839-e205b94514c7","Type":"ContainerDied","Data":"fd7d6e00d9b53c5cf44324345d643ed00376d51d84b938c2fc28429f580ef091"} Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.009041 4661 scope.go:117] "RemoveContainer" containerID="6610edecbf5f8693af6fa0504bc285ed19cc0eb56dd390c3e1503ba694919956" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.009050 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.048635 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.051634 4661 scope.go:117] "RemoveContainer" containerID="6610edecbf5f8693af6fa0504bc285ed19cc0eb56dd390c3e1503ba694919956" Jan 20 18:26:18 crc kubenswrapper[4661]: E0120 18:26:18.056059 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6610edecbf5f8693af6fa0504bc285ed19cc0eb56dd390c3e1503ba694919956\": container with ID starting with 6610edecbf5f8693af6fa0504bc285ed19cc0eb56dd390c3e1503ba694919956 not found: ID does not exist" containerID="6610edecbf5f8693af6fa0504bc285ed19cc0eb56dd390c3e1503ba694919956" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.056107 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6610edecbf5f8693af6fa0504bc285ed19cc0eb56dd390c3e1503ba694919956"} err="failed to get container status \"6610edecbf5f8693af6fa0504bc285ed19cc0eb56dd390c3e1503ba694919956\": rpc error: code = NotFound desc = could not find container \"6610edecbf5f8693af6fa0504bc285ed19cc0eb56dd390c3e1503ba694919956\": container with ID starting with 6610edecbf5f8693af6fa0504bc285ed19cc0eb56dd390c3e1503ba694919956 not found: ID does not exist" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.073559 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.082824 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.106738 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.108070 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.116177 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.116657 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.140838 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.177823 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c8897f-8188-4a97-8839-e205b94514c7" path="/var/lib/kubelet/pods/b2c8897f-8188-4a97-8839-e205b94514c7/volumes" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.221302 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a42dbd72-de9b-49d9-b7fb-b8255659f933-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a42dbd72-de9b-49d9-b7fb-b8255659f933\") " pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.221535 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnvlm\" (UniqueName: \"kubernetes.io/projected/a42dbd72-de9b-49d9-b7fb-b8255659f933-kube-api-access-dnvlm\") pod \"kube-state-metrics-0\" (UID: \"a42dbd72-de9b-49d9-b7fb-b8255659f933\") " pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.221655 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42dbd72-de9b-49d9-b7fb-b8255659f933-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a42dbd72-de9b-49d9-b7fb-b8255659f933\") " pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.221966 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a42dbd72-de9b-49d9-b7fb-b8255659f933-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a42dbd72-de9b-49d9-b7fb-b8255659f933\") " pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.327104 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a42dbd72-de9b-49d9-b7fb-b8255659f933-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a42dbd72-de9b-49d9-b7fb-b8255659f933\") " pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.327196 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a42dbd72-de9b-49d9-b7fb-b8255659f933-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a42dbd72-de9b-49d9-b7fb-b8255659f933\") " pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.327214 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnvlm\" (UniqueName: \"kubernetes.io/projected/a42dbd72-de9b-49d9-b7fb-b8255659f933-kube-api-access-dnvlm\") pod \"kube-state-metrics-0\" 
(UID: \"a42dbd72-de9b-49d9-b7fb-b8255659f933\") " pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.327239 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42dbd72-de9b-49d9-b7fb-b8255659f933-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a42dbd72-de9b-49d9-b7fb-b8255659f933\") " pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.334262 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a42dbd72-de9b-49d9-b7fb-b8255659f933-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a42dbd72-de9b-49d9-b7fb-b8255659f933\") " pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.336640 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42dbd72-de9b-49d9-b7fb-b8255659f933-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a42dbd72-de9b-49d9-b7fb-b8255659f933\") " pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.351985 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnvlm\" (UniqueName: \"kubernetes.io/projected/a42dbd72-de9b-49d9-b7fb-b8255659f933-kube-api-access-dnvlm\") pod \"kube-state-metrics-0\" (UID: \"a42dbd72-de9b-49d9-b7fb-b8255659f933\") " pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.366373 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a42dbd72-de9b-49d9-b7fb-b8255659f933-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a42dbd72-de9b-49d9-b7fb-b8255659f933\") " pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.450848 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.638378 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.638659 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="ceilometer-central-agent" containerID="cri-o://b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4" gracePeriod=30 Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.638748 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="proxy-httpd" containerID="cri-o://1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27" gracePeriod=30 Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.638795 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="sg-core" containerID="cri-o://565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327" gracePeriod=30 Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.638831 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="ceilometer-notification-agent" containerID="cri-o://b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7" gracePeriod=30 Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.694213 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbltb"] Jan 20 18:26:18 crc kubenswrapper[4661]: W0120 18:26:18.712077 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d53b16a_96f4_476d_bbfb_4b83adc3e33a.slice/crio-4ecde656544ccf281d0c27ee84720360cca86efad0f7e13db2f1f810585faedc WatchSource:0}: Error finding container 4ecde656544ccf281d0c27ee84720360cca86efad0f7e13db2f1f810585faedc: Status 404 returned error can't find the container with id 4ecde656544ccf281d0c27ee84720360cca86efad0f7e13db2f1f810585faedc Jan 20 18:26:18 crc kubenswrapper[4661]: I0120 18:26:18.888899 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 18:26:18 crc kubenswrapper[4661]: W0120 18:26:18.892339 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda42dbd72_de9b_49d9_b7fb_b8255659f933.slice/crio-2dd9586a896671c50fc76e654a44394590507f463d4cddf62f1afd63a7a3b90f WatchSource:0}: Error finding container 2dd9586a896671c50fc76e654a44394590507f463d4cddf62f1afd63a7a3b90f: Status 404 returned error can't find the container with id 2dd9586a896671c50fc76e654a44394590507f463d4cddf62f1afd63a7a3b90f Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.020066 4661 generic.go:334] "Generic (PLEG): container finished" podID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerID="1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27" exitCode=0 Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.020114 4661 generic.go:334] "Generic (PLEG): container finished" podID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerID="565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327" exitCode=2 Jan 20 18:26:19 crc 
kubenswrapper[4661]: I0120 18:26:19.020148 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a173834-dcef-416e-9eec-7c3038fcb78e","Type":"ContainerDied","Data":"1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27"} Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.020195 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a173834-dcef-416e-9eec-7c3038fcb78e","Type":"ContainerDied","Data":"565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327"} Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.022888 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a42dbd72-de9b-49d9-b7fb-b8255659f933","Type":"ContainerStarted","Data":"2dd9586a896671c50fc76e654a44394590507f463d4cddf62f1afd63a7a3b90f"} Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.025530 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbltb" event={"ID":"9d53b16a-96f4-476d-bbfb-4b83adc3e33a","Type":"ContainerStarted","Data":"4ecde656544ccf281d0c27ee84720360cca86efad0f7e13db2f1f810585faedc"} Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.540433 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.666594 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a173834-dcef-416e-9eec-7c3038fcb78e-run-httpd\") pod \"1a173834-dcef-416e-9eec-7c3038fcb78e\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.666662 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-config-data\") pod \"1a173834-dcef-416e-9eec-7c3038fcb78e\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.666727 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-combined-ca-bundle\") pod \"1a173834-dcef-416e-9eec-7c3038fcb78e\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.666812 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a173834-dcef-416e-9eec-7c3038fcb78e-log-httpd\") pod \"1a173834-dcef-416e-9eec-7c3038fcb78e\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.666843 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-sg-core-conf-yaml\") pod \"1a173834-dcef-416e-9eec-7c3038fcb78e\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.666900 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-scripts\") pod \"1a173834-dcef-416e-9eec-7c3038fcb78e\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.666938 4661 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dpfhh\" (UniqueName: \"kubernetes.io/projected/1a173834-dcef-416e-9eec-7c3038fcb78e-kube-api-access-dpfhh\") pod \"1a173834-dcef-416e-9eec-7c3038fcb78e\" (UID: \"1a173834-dcef-416e-9eec-7c3038fcb78e\") " Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.667549 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a173834-dcef-416e-9eec-7c3038fcb78e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1a173834-dcef-416e-9eec-7c3038fcb78e" (UID: "1a173834-dcef-416e-9eec-7c3038fcb78e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.668013 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a173834-dcef-416e-9eec-7c3038fcb78e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1a173834-dcef-416e-9eec-7c3038fcb78e" (UID: "1a173834-dcef-416e-9eec-7c3038fcb78e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.692385 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a173834-dcef-416e-9eec-7c3038fcb78e-kube-api-access-dpfhh" (OuterVolumeSpecName: "kube-api-access-dpfhh") pod "1a173834-dcef-416e-9eec-7c3038fcb78e" (UID: "1a173834-dcef-416e-9eec-7c3038fcb78e"). InnerVolumeSpecName "kube-api-access-dpfhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.692490 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-scripts" (OuterVolumeSpecName: "scripts") pod "1a173834-dcef-416e-9eec-7c3038fcb78e" (UID: "1a173834-dcef-416e-9eec-7c3038fcb78e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.715248 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1a173834-dcef-416e-9eec-7c3038fcb78e" (UID: "1a173834-dcef-416e-9eec-7c3038fcb78e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.754017 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a173834-dcef-416e-9eec-7c3038fcb78e" (UID: "1a173834-dcef-416e-9eec-7c3038fcb78e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.770646 4661 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a173834-dcef-416e-9eec-7c3038fcb78e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.770686 4661 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.770699 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.770707 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpfhh\" (UniqueName: \"kubernetes.io/projected/1a173834-dcef-416e-9eec-7c3038fcb78e-kube-api-access-dpfhh\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.770717 4661 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1a173834-dcef-416e-9eec-7c3038fcb78e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.770724 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.795024 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-config-data" (OuterVolumeSpecName: "config-data") pod "1a173834-dcef-416e-9eec-7c3038fcb78e" (UID: "1a173834-dcef-416e-9eec-7c3038fcb78e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:19 crc kubenswrapper[4661]: I0120 18:26:19.872630 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a173834-dcef-416e-9eec-7c3038fcb78e-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.036278 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a42dbd72-de9b-49d9-b7fb-b8255659f933","Type":"ContainerStarted","Data":"b4bd3fbbeda1bea0c51b290ebd2860cd889ad9f47208a9ff9e4ee3b8af6c9b09"} Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.036402 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.039723 4661 generic.go:334] "Generic (PLEG): container finished" podID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerID="b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7" exitCode=0 Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.039747 4661 generic.go:334] "Generic (PLEG): container finished" podID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerID="b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4" exitCode=0 Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.039791 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a173834-dcef-416e-9eec-7c3038fcb78e","Type":"ContainerDied","Data":"b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7"} Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.039820 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a173834-dcef-416e-9eec-7c3038fcb78e","Type":"ContainerDied","Data":"b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4"} Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.039853 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1a173834-dcef-416e-9eec-7c3038fcb78e","Type":"ContainerDied","Data":"c7d905549f2409bcbf3ba02269b955e951fc268fc9c0d4e299d4dc1880a54dfd"} Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.039869 4661 scope.go:117] "RemoveContainer" containerID="1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.040002 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.063743 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.6387643330000001 podStartE2EDuration="2.063722514s" podCreationTimestamp="2026-01-20 18:26:18 +0000 UTC" firstStartedPulling="2026-01-20 18:26:18.895569928 +0000 UTC m=+1235.226359590" lastFinishedPulling="2026-01-20 18:26:19.320528109 +0000 UTC m=+1235.651317771" observedRunningTime="2026-01-20 18:26:20.053210021 +0000 UTC m=+1236.383999683" watchObservedRunningTime="2026-01-20 18:26:20.063722514 +0000 UTC m=+1236.394512176" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.130645 4661 scope.go:117] "RemoveContainer" containerID="565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.133122 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.173334 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.173375 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:20 crc kubenswrapper[4661]: E0120 18:26:20.173612 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="ceilometer-central-agent" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.173628 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="ceilometer-central-agent" Jan 20 18:26:20 crc kubenswrapper[4661]: E0120 18:26:20.173690 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="sg-core" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.173700 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="sg-core" Jan 20 18:26:20 crc kubenswrapper[4661]: E0120 18:26:20.173714 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="ceilometer-notification-agent" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.173720 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="ceilometer-notification-agent" Jan 20 18:26:20 crc kubenswrapper[4661]: E0120 18:26:20.173734 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="proxy-httpd" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.173740 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="proxy-httpd" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.173936 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="ceilometer-notification-agent" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.173959 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="ceilometer-central-agent" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.173971 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="sg-core" Jan 20 18:26:20 crc 
kubenswrapper[4661]: I0120 18:26:20.173979 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" containerName="proxy-httpd" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.175436 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.175548 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.178578 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.178796 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.181047 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.185258 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.185304 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.185354 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81e382c-ac8b-4ff0-88c0-d604d593f75a-log-httpd\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.185418 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81e382c-ac8b-4ff0-88c0-d604d593f75a-run-httpd\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.185460 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.185474 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-scripts\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.185507 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfzdk\" (UniqueName: \"kubernetes.io/projected/d81e382c-ac8b-4ff0-88c0-d604d593f75a-kube-api-access-wfzdk\") pod \"ceilometer-0\" (UID: 
\"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.185534 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-config-data\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.206784 4661 scope.go:117] "RemoveContainer" containerID="b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.229505 4661 scope.go:117] "RemoveContainer" containerID="b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.266143 4661 scope.go:117] "RemoveContainer" containerID="1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27" Jan 20 18:26:20 crc kubenswrapper[4661]: E0120 18:26:20.268806 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27\": container with ID starting with 1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27 not found: ID does not exist" containerID="1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.268857 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27"} err="failed to get container status \"1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27\": rpc error: code = NotFound desc = could not find container \"1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27\": container with ID starting with 1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27 not found: ID does not exist" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.268889 4661 scope.go:117] "RemoveContainer" containerID="565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327" Jan 20 18:26:20 crc kubenswrapper[4661]: E0120 18:26:20.269294 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327\": container with ID starting with 565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327 not found: ID does not exist" containerID="565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.269328 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327"} err="failed to get container status \"565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327\": rpc error: code = NotFound desc = could not find container \"565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327\": container with ID starting with 565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327 not found: ID does not exist" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.269344 4661 scope.go:117] "RemoveContainer" containerID="b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7" Jan 20 18:26:20 crc kubenswrapper[4661]: E0120 18:26:20.269603 4661 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7\": container with ID starting with b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7 not found: ID does not exist" containerID="b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.269627 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7"} err="failed to get container status \"b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7\": rpc error: code = NotFound desc = could not find container \"b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7\": container with ID starting with b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7 not found: ID does not exist" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.269644 4661 scope.go:117] "RemoveContainer" containerID="b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4" Jan 20 18:26:20 crc kubenswrapper[4661]: E0120 18:26:20.269985 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4\": container with ID starting with b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4 not found: ID does not exist" containerID="b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.270010 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4"} err="failed to get container status \"b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4\": rpc error: code = NotFound desc = could not find container \"b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4\": container with ID starting with b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4 not found: ID does not exist" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.270024 4661 scope.go:117] "RemoveContainer" containerID="1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.270522 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27"} err="failed to get container status \"1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27\": rpc error: code = NotFound desc = could not find container \"1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27\": container with ID starting with 1c528b84591f85ed085d477bc1bda4d1bf46412cadbedf2f18823c1c354c3c27 not found: ID does not exist" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.270545 4661 scope.go:117] "RemoveContainer" containerID="565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.271704 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327"} err="failed to get container status \"565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327\": rpc error: code = NotFound desc = could not find container 
\"565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327\": container with ID starting with 565590ab98ab21b5dbe5f431ca9ea7967e33cef8e250a5dd0923f9a533f5d327 not found: ID does not exist" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.271727 4661 scope.go:117] "RemoveContainer" containerID="b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.272110 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7"} err="failed to get container status \"b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7\": rpc error: code = NotFound desc = could not find container \"b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7\": container with ID starting with b163de4ac1a6ddbaa0c1653886e73f82c0ee48f5eb2958119459079f2d9d89b7 not found: ID does not exist" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.272135 4661 scope.go:117] "RemoveContainer" containerID="b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.273836 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4"} err="failed to get container status \"b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4\": rpc error: code = NotFound desc = could not find container \"b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4\": container with ID starting with b6ca95cfe8ef7b57524da992adeffb92f85b769bf81132da041917b2e0abe0c4 not found: ID does not exist" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.286743 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81e382c-ac8b-4ff0-88c0-d604d593f75a-run-httpd\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.286793 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.286810 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-scripts\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.286836 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfzdk\" (UniqueName: \"kubernetes.io/projected/d81e382c-ac8b-4ff0-88c0-d604d593f75a-kube-api-access-wfzdk\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.286862 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-config-data\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 
18:26:20.286899 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.286922 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.287013 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81e382c-ac8b-4ff0-88c0-d604d593f75a-log-httpd\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.288156 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81e382c-ac8b-4ff0-88c0-d604d593f75a-log-httpd\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.288358 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81e382c-ac8b-4ff0-88c0-d604d593f75a-run-httpd\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.293331 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-config-data\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.295284 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.296262 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.296772 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-scripts\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.298415 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.315921 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wfzdk\" (UniqueName: \"kubernetes.io/projected/d81e382c-ac8b-4ff0-88c0-d604d593f75a-kube-api-access-wfzdk\") pod \"ceilometer-0\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.503810 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:26:20 crc kubenswrapper[4661]: I0120 18:26:20.975798 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:20 crc kubenswrapper[4661]: W0120 18:26:20.979058 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd81e382c_ac8b_4ff0_88c0_d604d593f75a.slice/crio-dd50447810a17d906fddc4bd01fe69f89f79be64911d2f6f04b25d3c061034ca WatchSource:0}: Error finding container dd50447810a17d906fddc4bd01fe69f89f79be64911d2f6f04b25d3c061034ca: Status 404 returned error can't find the container with id dd50447810a17d906fddc4bd01fe69f89f79be64911d2f6f04b25d3c061034ca Jan 20 18:26:21 crc kubenswrapper[4661]: I0120 18:26:21.052176 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d81e382c-ac8b-4ff0-88c0-d604d593f75a","Type":"ContainerStarted","Data":"dd50447810a17d906fddc4bd01fe69f89f79be64911d2f6f04b25d3c061034ca"} Jan 20 18:26:21 crc kubenswrapper[4661]: I0120 18:26:21.366658 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:22 crc kubenswrapper[4661]: I0120 18:26:22.072707 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d81e382c-ac8b-4ff0-88c0-d604d593f75a","Type":"ContainerStarted","Data":"eecd28c7713ef934ce80f916120c1f21736c559183af0d31f9a0212a18869453"} Jan 20 18:26:22 crc kubenswrapper[4661]: I0120 18:26:22.151170 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a173834-dcef-416e-9eec-7c3038fcb78e" path="/var/lib/kubelet/pods/1a173834-dcef-416e-9eec-7c3038fcb78e/volumes" Jan 20 18:26:23 crc kubenswrapper[4661]: I0120 18:26:23.085099 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d81e382c-ac8b-4ff0-88c0-d604d593f75a","Type":"ContainerStarted","Data":"9e152bee87e820935f37220ae3486053f2f75adad8437cd919f41e0432a28bb1"} Jan 20 18:26:24 crc kubenswrapper[4661]: I0120 18:26:24.098577 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d81e382c-ac8b-4ff0-88c0-d604d593f75a","Type":"ContainerStarted","Data":"386b4ba09b2989b6d3d4eb6cebb2ccc48f527d9df337d2a139365862bf3325cd"} Jan 20 18:26:28 crc kubenswrapper[4661]: I0120 18:26:28.463043 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 20 18:26:30 crc kubenswrapper[4661]: I0120 18:26:30.182433 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbltb" event={"ID":"9d53b16a-96f4-476d-bbfb-4b83adc3e33a","Type":"ContainerStarted","Data":"4f65aa729892c3506e526a21564bd0d05bf1660279c82973c9e5a69dc58b5b40"} Jan 20 18:26:30 crc kubenswrapper[4661]: I0120 18:26:30.202192 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zbltb" podStartSLOduration=2.377923013 podStartE2EDuration="13.202174663s" podCreationTimestamp="2026-01-20 18:26:17 +0000 UTC" firstStartedPulling="2026-01-20 
18:26:18.720589337 +0000 UTC m=+1235.051378999" lastFinishedPulling="2026-01-20 18:26:29.544840977 +0000 UTC m=+1245.875630649" observedRunningTime="2026-01-20 18:26:30.201942847 +0000 UTC m=+1246.532732509" watchObservedRunningTime="2026-01-20 18:26:30.202174663 +0000 UTC m=+1246.532964325" Jan 20 18:26:31 crc kubenswrapper[4661]: I0120 18:26:31.189294 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d81e382c-ac8b-4ff0-88c0-d604d593f75a","Type":"ContainerStarted","Data":"8ef29d914a654b2a7c8dd6fd7036af984b4b3272ea8a1d028d9cf67d21169514"} Jan 20 18:26:31 crc kubenswrapper[4661]: I0120 18:26:31.189735 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 18:26:31 crc kubenswrapper[4661]: I0120 18:26:31.189416 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="sg-core" containerID="cri-o://386b4ba09b2989b6d3d4eb6cebb2ccc48f527d9df337d2a139365862bf3325cd" gracePeriod=30 Jan 20 18:26:31 crc kubenswrapper[4661]: I0120 18:26:31.189383 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="ceilometer-central-agent" containerID="cri-o://eecd28c7713ef934ce80f916120c1f21736c559183af0d31f9a0212a18869453" gracePeriod=30 Jan 20 18:26:31 crc kubenswrapper[4661]: I0120 18:26:31.189472 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="proxy-httpd" containerID="cri-o://8ef29d914a654b2a7c8dd6fd7036af984b4b3272ea8a1d028d9cf67d21169514" gracePeriod=30 Jan 20 18:26:31 crc kubenswrapper[4661]: I0120 18:26:31.191080 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="ceilometer-notification-agent" containerID="cri-o://9e152bee87e820935f37220ae3486053f2f75adad8437cd919f41e0432a28bb1" gracePeriod=30 Jan 20 18:26:31 crc kubenswrapper[4661]: I0120 18:26:31.238599 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.733910951 podStartE2EDuration="11.238579991s" podCreationTimestamp="2026-01-20 18:26:20 +0000 UTC" firstStartedPulling="2026-01-20 18:26:20.981266226 +0000 UTC m=+1237.312055888" lastFinishedPulling="2026-01-20 18:26:30.485935266 +0000 UTC m=+1246.816724928" observedRunningTime="2026-01-20 18:26:31.21986958 +0000 UTC m=+1247.550659252" watchObservedRunningTime="2026-01-20 18:26:31.238579991 +0000 UTC m=+1247.569369653" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.197993 4661 generic.go:334] "Generic (PLEG): container finished" podID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerID="8ef29d914a654b2a7c8dd6fd7036af984b4b3272ea8a1d028d9cf67d21169514" exitCode=0 Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.198298 4661 generic.go:334] "Generic (PLEG): container finished" podID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerID="386b4ba09b2989b6d3d4eb6cebb2ccc48f527d9df337d2a139365862bf3325cd" exitCode=2 Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.198307 4661 generic.go:334] "Generic (PLEG): container finished" podID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerID="9e152bee87e820935f37220ae3486053f2f75adad8437cd919f41e0432a28bb1" exitCode=0 Jan 20 18:26:32 crc 
kubenswrapper[4661]: I0120 18:26:32.198314 4661 generic.go:334] "Generic (PLEG): container finished" podID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerID="eecd28c7713ef934ce80f916120c1f21736c559183af0d31f9a0212a18869453" exitCode=0 Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.198066 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d81e382c-ac8b-4ff0-88c0-d604d593f75a","Type":"ContainerDied","Data":"8ef29d914a654b2a7c8dd6fd7036af984b4b3272ea8a1d028d9cf67d21169514"} Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.198351 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d81e382c-ac8b-4ff0-88c0-d604d593f75a","Type":"ContainerDied","Data":"386b4ba09b2989b6d3d4eb6cebb2ccc48f527d9df337d2a139365862bf3325cd"} Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.198367 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d81e382c-ac8b-4ff0-88c0-d604d593f75a","Type":"ContainerDied","Data":"9e152bee87e820935f37220ae3486053f2f75adad8437cd919f41e0432a28bb1"} Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.198381 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d81e382c-ac8b-4ff0-88c0-d604d593f75a","Type":"ContainerDied","Data":"eecd28c7713ef934ce80f916120c1f21736c559183af0d31f9a0212a18869453"} Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.198395 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d81e382c-ac8b-4ff0-88c0-d604d593f75a","Type":"ContainerDied","Data":"dd50447810a17d906fddc4bd01fe69f89f79be64911d2f6f04b25d3c061034ca"} Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.198405 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd50447810a17d906fddc4bd01fe69f89f79be64911d2f6f04b25d3c061034ca" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.263398 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.410752 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81e382c-ac8b-4ff0-88c0-d604d593f75a-run-httpd\") pod \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.410999 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-combined-ca-bundle\") pod \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.411102 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-scripts\") pod \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.411226 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-ceilometer-tls-certs\") pod \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.411600 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-sg-core-conf-yaml\") pod \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.411767 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-config-data\") pod \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.411864 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81e382c-ac8b-4ff0-88c0-d604d593f75a-log-httpd\") pod \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.411952 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfzdk\" (UniqueName: \"kubernetes.io/projected/d81e382c-ac8b-4ff0-88c0-d604d593f75a-kube-api-access-wfzdk\") pod \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\" (UID: \"d81e382c-ac8b-4ff0-88c0-d604d593f75a\") " Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.412221 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81e382c-ac8b-4ff0-88c0-d604d593f75a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d81e382c-ac8b-4ff0-88c0-d604d593f75a" (UID: "d81e382c-ac8b-4ff0-88c0-d604d593f75a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.412663 4661 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81e382c-ac8b-4ff0-88c0-d604d593f75a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.413093 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81e382c-ac8b-4ff0-88c0-d604d593f75a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d81e382c-ac8b-4ff0-88c0-d604d593f75a" (UID: "d81e382c-ac8b-4ff0-88c0-d604d593f75a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.417771 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81e382c-ac8b-4ff0-88c0-d604d593f75a-kube-api-access-wfzdk" (OuterVolumeSpecName: "kube-api-access-wfzdk") pod "d81e382c-ac8b-4ff0-88c0-d604d593f75a" (UID: "d81e382c-ac8b-4ff0-88c0-d604d593f75a"). InnerVolumeSpecName "kube-api-access-wfzdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.422297 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-scripts" (OuterVolumeSpecName: "scripts") pod "d81e382c-ac8b-4ff0-88c0-d604d593f75a" (UID: "d81e382c-ac8b-4ff0-88c0-d604d593f75a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.465842 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d81e382c-ac8b-4ff0-88c0-d604d593f75a" (UID: "d81e382c-ac8b-4ff0-88c0-d604d593f75a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.473568 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d81e382c-ac8b-4ff0-88c0-d604d593f75a" (UID: "d81e382c-ac8b-4ff0-88c0-d604d593f75a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.510829 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d81e382c-ac8b-4ff0-88c0-d604d593f75a" (UID: "d81e382c-ac8b-4ff0-88c0-d604d593f75a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.514209 4661 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81e382c-ac8b-4ff0-88c0-d604d593f75a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.514343 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfzdk\" (UniqueName: \"kubernetes.io/projected/d81e382c-ac8b-4ff0-88c0-d604d593f75a-kube-api-access-wfzdk\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.514402 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.514455 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.514517 4661 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.514571 4661 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.549425 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-config-data" (OuterVolumeSpecName: "config-data") pod "d81e382c-ac8b-4ff0-88c0-d604d593f75a" (UID: "d81e382c-ac8b-4ff0-88c0-d604d593f75a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:32 crc kubenswrapper[4661]: I0120 18:26:32.615381 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81e382c-ac8b-4ff0-88c0-d604d593f75a-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.205159 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.244498 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.259729 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.272910 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:33 crc kubenswrapper[4661]: E0120 18:26:33.273324 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="proxy-httpd" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.273351 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="proxy-httpd" Jan 20 18:26:33 crc kubenswrapper[4661]: E0120 18:26:33.273379 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="ceilometer-central-agent" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.273389 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="ceilometer-central-agent" Jan 20 18:26:33 crc kubenswrapper[4661]: E0120 18:26:33.273405 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="ceilometer-notification-agent" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.273414 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="ceilometer-notification-agent" Jan 20 18:26:33 crc kubenswrapper[4661]: E0120 18:26:33.273430 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="sg-core" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.273437 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="sg-core" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.273783 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="proxy-httpd" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.273807 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="ceilometer-central-agent" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.273827 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="ceilometer-notification-agent" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.273843 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" containerName="sg-core" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.275597 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.279463 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.279562 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.286521 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.297506 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.470538 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-scripts\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.470585 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.470725 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-config-data\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.470760 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.470818 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwdb7\" (UniqueName: \"kubernetes.io/projected/30cce0a2-3ca5-4085-b747-c296283e552c-kube-api-access-jwdb7\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.470862 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cce0a2-3ca5-4085-b747-c296283e552c-log-httpd\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.470891 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cce0a2-3ca5-4085-b747-c296283e552c-run-httpd\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.471005 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.572612 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cce0a2-3ca5-4085-b747-c296283e552c-log-httpd\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.573009 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cce0a2-3ca5-4085-b747-c296283e552c-run-httpd\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.573053 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.573085 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-scripts\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.573100 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.573122 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cce0a2-3ca5-4085-b747-c296283e552c-log-httpd\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.573152 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-config-data\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.573977 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.574052 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cce0a2-3ca5-4085-b747-c296283e552c-run-httpd\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.574054 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwdb7\" (UniqueName: 
\"kubernetes.io/projected/30cce0a2-3ca5-4085-b747-c296283e552c-kube-api-access-jwdb7\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.579580 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-scripts\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.579582 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.581260 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-config-data\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.583249 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.589728 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.614375 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwdb7\" (UniqueName: \"kubernetes.io/projected/30cce0a2-3ca5-4085-b747-c296283e552c-kube-api-access-jwdb7\") pod \"ceilometer-0\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " pod="openstack/ceilometer-0" Jan 20 18:26:33 crc kubenswrapper[4661]: I0120 18:26:33.889056 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:26:34 crc kubenswrapper[4661]: I0120 18:26:34.152284 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d81e382c-ac8b-4ff0-88c0-d604d593f75a" path="/var/lib/kubelet/pods/d81e382c-ac8b-4ff0-88c0-d604d593f75a/volumes" Jan 20 18:26:34 crc kubenswrapper[4661]: I0120 18:26:34.401054 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:34 crc kubenswrapper[4661]: W0120 18:26:34.414456 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30cce0a2_3ca5_4085_b747_c296283e552c.slice/crio-80a653ec48aeba76c5168229deb68cdac3f92fdab7ac6833136df5d3d808c5d1 WatchSource:0}: Error finding container 80a653ec48aeba76c5168229deb68cdac3f92fdab7ac6833136df5d3d808c5d1: Status 404 returned error can't find the container with id 80a653ec48aeba76c5168229deb68cdac3f92fdab7ac6833136df5d3d808c5d1 Jan 20 18:26:34 crc kubenswrapper[4661]: I0120 18:26:34.718501 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:35 crc kubenswrapper[4661]: I0120 18:26:35.223454 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30cce0a2-3ca5-4085-b747-c296283e552c","Type":"ContainerStarted","Data":"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9"} Jan 20 18:26:35 crc kubenswrapper[4661]: I0120 18:26:35.223495 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30cce0a2-3ca5-4085-b747-c296283e552c","Type":"ContainerStarted","Data":"80a653ec48aeba76c5168229deb68cdac3f92fdab7ac6833136df5d3d808c5d1"} Jan 20 18:26:36 crc kubenswrapper[4661]: I0120 18:26:36.247899 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30cce0a2-3ca5-4085-b747-c296283e552c","Type":"ContainerStarted","Data":"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a"} Jan 20 18:26:37 crc kubenswrapper[4661]: I0120 18:26:37.257739 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30cce0a2-3ca5-4085-b747-c296283e552c","Type":"ContainerStarted","Data":"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721"} Jan 20 18:26:39 crc kubenswrapper[4661]: I0120 18:26:39.284492 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30cce0a2-3ca5-4085-b747-c296283e552c","Type":"ContainerStarted","Data":"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7"} Jan 20 18:26:39 crc kubenswrapper[4661]: I0120 18:26:39.284746 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 18:26:39 crc kubenswrapper[4661]: I0120 18:26:39.284684 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="sg-core" containerID="cri-o://2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721" gracePeriod=30 Jan 20 18:26:39 crc kubenswrapper[4661]: I0120 18:26:39.284614 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="ceilometer-central-agent" containerID="cri-o://052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9" gracePeriod=30 Jan 20 18:26:39 crc kubenswrapper[4661]: I0120 18:26:39.284720 
4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="proxy-httpd" containerID="cri-o://aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7" gracePeriod=30 Jan 20 18:26:39 crc kubenswrapper[4661]: I0120 18:26:39.284685 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="ceilometer-notification-agent" containerID="cri-o://6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a" gracePeriod=30 Jan 20 18:26:39 crc kubenswrapper[4661]: I0120 18:26:39.324798 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.664308784 podStartE2EDuration="6.324778698s" podCreationTimestamp="2026-01-20 18:26:33 +0000 UTC" firstStartedPulling="2026-01-20 18:26:34.416763252 +0000 UTC m=+1250.747552914" lastFinishedPulling="2026-01-20 18:26:38.077233166 +0000 UTC m=+1254.408022828" observedRunningTime="2026-01-20 18:26:39.318761833 +0000 UTC m=+1255.649551495" watchObservedRunningTime="2026-01-20 18:26:39.324778698 +0000 UTC m=+1255.655568360" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.250025 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.304280 4661 generic.go:334] "Generic (PLEG): container finished" podID="30cce0a2-3ca5-4085-b747-c296283e552c" containerID="aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7" exitCode=0 Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.304310 4661 generic.go:334] "Generic (PLEG): container finished" podID="30cce0a2-3ca5-4085-b747-c296283e552c" containerID="2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721" exitCode=2 Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.304321 4661 generic.go:334] "Generic (PLEG): container finished" podID="30cce0a2-3ca5-4085-b747-c296283e552c" containerID="6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a" exitCode=0 Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.304329 4661 generic.go:334] "Generic (PLEG): container finished" podID="30cce0a2-3ca5-4085-b747-c296283e552c" containerID="052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9" exitCode=0 Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.304349 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30cce0a2-3ca5-4085-b747-c296283e552c","Type":"ContainerDied","Data":"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7"} Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.304380 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30cce0a2-3ca5-4085-b747-c296283e552c","Type":"ContainerDied","Data":"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721"} Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.304390 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30cce0a2-3ca5-4085-b747-c296283e552c","Type":"ContainerDied","Data":"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a"} Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.304402 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"30cce0a2-3ca5-4085-b747-c296283e552c","Type":"ContainerDied","Data":"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9"} Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.304411 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30cce0a2-3ca5-4085-b747-c296283e552c","Type":"ContainerDied","Data":"80a653ec48aeba76c5168229deb68cdac3f92fdab7ac6833136df5d3d808c5d1"} Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.304443 4661 scope.go:117] "RemoveContainer" containerID="aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.304416 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.325123 4661 scope.go:117] "RemoveContainer" containerID="2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.341008 4661 scope.go:117] "RemoveContainer" containerID="6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.358272 4661 scope.go:117] "RemoveContainer" containerID="052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.381301 4661 scope.go:117] "RemoveContainer" containerID="aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7" Jan 20 18:26:40 crc kubenswrapper[4661]: E0120 18:26:40.381869 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7\": container with ID starting with aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7 not found: ID does not exist" containerID="aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.381901 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7"} err="failed to get container status \"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7\": rpc error: code = NotFound desc = could not find container \"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7\": container with ID starting with aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7 not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.381922 4661 scope.go:117] "RemoveContainer" containerID="2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721" Jan 20 18:26:40 crc kubenswrapper[4661]: E0120 18:26:40.382391 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721\": container with ID starting with 2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721 not found: ID does not exist" containerID="2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.382415 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721"} err="failed to get container status \"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721\": 
rpc error: code = NotFound desc = could not find container \"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721\": container with ID starting with 2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721 not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.382428 4661 scope.go:117] "RemoveContainer" containerID="6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a" Jan 20 18:26:40 crc kubenswrapper[4661]: E0120 18:26:40.382781 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a\": container with ID starting with 6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a not found: ID does not exist" containerID="6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.382836 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a"} err="failed to get container status \"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a\": rpc error: code = NotFound desc = could not find container \"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a\": container with ID starting with 6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.382871 4661 scope.go:117] "RemoveContainer" containerID="052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9" Jan 20 18:26:40 crc kubenswrapper[4661]: E0120 18:26:40.383248 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9\": container with ID starting with 052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9 not found: ID does not exist" containerID="052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.383285 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9"} err="failed to get container status \"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9\": rpc error: code = NotFound desc = could not find container \"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9\": container with ID starting with 052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9 not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.383304 4661 scope.go:117] "RemoveContainer" containerID="aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.383618 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7"} err="failed to get container status \"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7\": rpc error: code = NotFound desc = could not find container \"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7\": container with ID starting with aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7 not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 
18:26:40.383688 4661 scope.go:117] "RemoveContainer" containerID="2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.384215 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721"} err="failed to get container status \"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721\": rpc error: code = NotFound desc = could not find container \"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721\": container with ID starting with 2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721 not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.384246 4661 scope.go:117] "RemoveContainer" containerID="6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.384591 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a"} err="failed to get container status \"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a\": rpc error: code = NotFound desc = could not find container \"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a\": container with ID starting with 6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.384624 4661 scope.go:117] "RemoveContainer" containerID="052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.384922 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9"} err="failed to get container status \"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9\": rpc error: code = NotFound desc = could not find container \"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9\": container with ID starting with 052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9 not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.384955 4661 scope.go:117] "RemoveContainer" containerID="aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.385316 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7"} err="failed to get container status \"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7\": rpc error: code = NotFound desc = could not find container \"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7\": container with ID starting with aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7 not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.385344 4661 scope.go:117] "RemoveContainer" containerID="2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.385828 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721"} err="failed to get container status 
\"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721\": rpc error: code = NotFound desc = could not find container \"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721\": container with ID starting with 2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721 not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.385861 4661 scope.go:117] "RemoveContainer" containerID="6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.386227 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a"} err="failed to get container status \"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a\": rpc error: code = NotFound desc = could not find container \"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a\": container with ID starting with 6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.386258 4661 scope.go:117] "RemoveContainer" containerID="052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.386554 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9"} err="failed to get container status \"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9\": rpc error: code = NotFound desc = could not find container \"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9\": container with ID starting with 052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9 not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.386575 4661 scope.go:117] "RemoveContainer" containerID="aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.387172 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cce0a2-3ca5-4085-b747-c296283e552c-log-httpd\") pod \"30cce0a2-3ca5-4085-b747-c296283e552c\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.387218 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-ceilometer-tls-certs\") pod \"30cce0a2-3ca5-4085-b747-c296283e552c\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.387312 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-sg-core-conf-yaml\") pod \"30cce0a2-3ca5-4085-b747-c296283e552c\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.387365 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-combined-ca-bundle\") pod \"30cce0a2-3ca5-4085-b747-c296283e552c\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.387492 4661 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-config-data\") pod \"30cce0a2-3ca5-4085-b747-c296283e552c\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.387530 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-scripts\") pod \"30cce0a2-3ca5-4085-b747-c296283e552c\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.387563 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwdb7\" (UniqueName: \"kubernetes.io/projected/30cce0a2-3ca5-4085-b747-c296283e552c-kube-api-access-jwdb7\") pod \"30cce0a2-3ca5-4085-b747-c296283e552c\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.387626 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cce0a2-3ca5-4085-b747-c296283e552c-run-httpd\") pod \"30cce0a2-3ca5-4085-b747-c296283e552c\" (UID: \"30cce0a2-3ca5-4085-b747-c296283e552c\") " Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.387795 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30cce0a2-3ca5-4085-b747-c296283e552c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30cce0a2-3ca5-4085-b747-c296283e552c" (UID: "30cce0a2-3ca5-4085-b747-c296283e552c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.388086 4661 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cce0a2-3ca5-4085-b747-c296283e552c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.388369 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30cce0a2-3ca5-4085-b747-c296283e552c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30cce0a2-3ca5-4085-b747-c296283e552c" (UID: "30cce0a2-3ca5-4085-b747-c296283e552c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.388791 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7"} err="failed to get container status \"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7\": rpc error: code = NotFound desc = could not find container \"aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7\": container with ID starting with aed53fa349ffde3648eb78bab60ff95e2352151022fee66989b24cef4dc917a7 not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.388855 4661 scope.go:117] "RemoveContainer" containerID="2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.390628 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721"} err="failed to get container status \"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721\": rpc error: code = NotFound desc = could not find container \"2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721\": container with ID starting with 2d4a59e9543f81433aeca97052ee884222737a3a52cdb7141ae739e2d6612721 not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.390727 4661 scope.go:117] "RemoveContainer" containerID="6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.391087 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a"} err="failed to get container status \"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a\": rpc error: code = NotFound desc = could not find container \"6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a\": container with ID starting with 6777d4200d5c66600a93ebb0a89803789ec9173c3dfdd828c27bd4d56cc6b34a not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.391125 4661 scope.go:117] "RemoveContainer" containerID="052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.391382 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9"} err="failed to get container status \"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9\": rpc error: code = NotFound desc = could not find container \"052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9\": container with ID starting with 052d3bb31576962605f1cbca3cd4627b6890dfa8764e8ebb732dd91663ad27f9 not found: ID does not exist" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.392602 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-scripts" (OuterVolumeSpecName: "scripts") pod "30cce0a2-3ca5-4085-b747-c296283e552c" (UID: "30cce0a2-3ca5-4085-b747-c296283e552c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.393162 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30cce0a2-3ca5-4085-b747-c296283e552c-kube-api-access-jwdb7" (OuterVolumeSpecName: "kube-api-access-jwdb7") pod "30cce0a2-3ca5-4085-b747-c296283e552c" (UID: "30cce0a2-3ca5-4085-b747-c296283e552c"). InnerVolumeSpecName "kube-api-access-jwdb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.421308 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30cce0a2-3ca5-4085-b747-c296283e552c" (UID: "30cce0a2-3ca5-4085-b747-c296283e552c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.457574 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "30cce0a2-3ca5-4085-b747-c296283e552c" (UID: "30cce0a2-3ca5-4085-b747-c296283e552c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.471552 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30cce0a2-3ca5-4085-b747-c296283e552c" (UID: "30cce0a2-3ca5-4085-b747-c296283e552c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.489652 4661 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.489694 4661 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.489706 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.489716 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.489726 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwdb7\" (UniqueName: \"kubernetes.io/projected/30cce0a2-3ca5-4085-b747-c296283e552c-kube-api-access-jwdb7\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.489735 4661 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cce0a2-3ca5-4085-b747-c296283e552c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.504876 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-config-data" (OuterVolumeSpecName: "config-data") pod "30cce0a2-3ca5-4085-b747-c296283e552c" (UID: "30cce0a2-3ca5-4085-b747-c296283e552c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.591389 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30cce0a2-3ca5-4085-b747-c296283e552c-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.649294 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.655752 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.686635 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:40 crc kubenswrapper[4661]: E0120 18:26:40.687036 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="sg-core" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.687048 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="sg-core" Jan 20 18:26:40 crc kubenswrapper[4661]: E0120 18:26:40.687057 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="ceilometer-central-agent" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.687062 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="ceilometer-central-agent" Jan 20 18:26:40 crc kubenswrapper[4661]: E0120 18:26:40.687079 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="ceilometer-notification-agent" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.687085 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="ceilometer-notification-agent" Jan 20 18:26:40 crc kubenswrapper[4661]: E0120 18:26:40.687097 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="proxy-httpd" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.687104 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="proxy-httpd" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.687248 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="ceilometer-notification-agent" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.687261 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="ceilometer-central-agent" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.687272 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="proxy-httpd" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.687288 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" containerName="sg-core" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.688612 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.693592 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.697864 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.697942 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.704896 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.705036 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.705076 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b7a79c6-309f-494e-87c6-429326682d11-log-httpd\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.705210 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-config-data\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.705287 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.705316 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-scripts\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.705360 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxr7d\" (UniqueName: \"kubernetes.io/projected/4b7a79c6-309f-494e-87c6-429326682d11-kube-api-access-gxr7d\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.705873 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b7a79c6-309f-494e-87c6-429326682d11-run-httpd\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 
20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.744864 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.807924 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.807987 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.808008 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b7a79c6-309f-494e-87c6-429326682d11-log-httpd\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.808027 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-config-data\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.808057 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.808075 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-scripts\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.809544 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxr7d\" (UniqueName: \"kubernetes.io/projected/4b7a79c6-309f-494e-87c6-429326682d11-kube-api-access-gxr7d\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.809649 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b7a79c6-309f-494e-87c6-429326682d11-run-httpd\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.808452 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b7a79c6-309f-494e-87c6-429326682d11-log-httpd\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.810058 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4b7a79c6-309f-494e-87c6-429326682d11-run-httpd\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.812415 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.812489 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.813196 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.813455 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-scripts\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.813484 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-config-data\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:40 crc kubenswrapper[4661]: I0120 18:26:40.833164 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxr7d\" (UniqueName: \"kubernetes.io/projected/4b7a79c6-309f-494e-87c6-429326682d11-kube-api-access-gxr7d\") pod \"ceilometer-0\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " pod="openstack/ceilometer-0" Jan 20 18:26:41 crc kubenswrapper[4661]: I0120 18:26:41.030379 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:26:41 crc kubenswrapper[4661]: I0120 18:26:41.501078 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:26:42 crc kubenswrapper[4661]: I0120 18:26:42.153759 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30cce0a2-3ca5-4085-b747-c296283e552c" path="/var/lib/kubelet/pods/30cce0a2-3ca5-4085-b747-c296283e552c/volumes" Jan 20 18:26:42 crc kubenswrapper[4661]: I0120 18:26:42.332734 4661 generic.go:334] "Generic (PLEG): container finished" podID="9d53b16a-96f4-476d-bbfb-4b83adc3e33a" containerID="4f65aa729892c3506e526a21564bd0d05bf1660279c82973c9e5a69dc58b5b40" exitCode=0 Jan 20 18:26:42 crc kubenswrapper[4661]: I0120 18:26:42.332868 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbltb" event={"ID":"9d53b16a-96f4-476d-bbfb-4b83adc3e33a","Type":"ContainerDied","Data":"4f65aa729892c3506e526a21564bd0d05bf1660279c82973c9e5a69dc58b5b40"} Jan 20 18:26:42 crc kubenswrapper[4661]: I0120 18:26:42.336813 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b7a79c6-309f-494e-87c6-429326682d11","Type":"ContainerStarted","Data":"7fdfe51495ff10fd8052a516b680fd64efc0deb7504015f56e21edf0431bab4f"} Jan 20 18:26:42 crc kubenswrapper[4661]: I0120 18:26:42.336877 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b7a79c6-309f-494e-87c6-429326682d11","Type":"ContainerStarted","Data":"bad56f2668fa3dfe5ff84fff1c654c919f342f78286c9c0ccf655e3444e58f61"} Jan 20 18:26:43 crc kubenswrapper[4661]: I0120 18:26:43.346749 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b7a79c6-309f-494e-87c6-429326682d11","Type":"ContainerStarted","Data":"825b326842fc66dfde87ec11b4e620b73492f567e913b75df433a1523999470a"} Jan 20 18:26:43 crc kubenswrapper[4661]: I0120 18:26:43.619449 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:43 crc kubenswrapper[4661]: I0120 18:26:43.664874 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgkmn\" (UniqueName: \"kubernetes.io/projected/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-kube-api-access-mgkmn\") pod \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " Jan 20 18:26:43 crc kubenswrapper[4661]: I0120 18:26:43.664966 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-config-data\") pod \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " Jan 20 18:26:43 crc kubenswrapper[4661]: I0120 18:26:43.665096 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-scripts\") pod \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " Jan 20 18:26:43 crc kubenswrapper[4661]: I0120 18:26:43.665167 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-combined-ca-bundle\") pod \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " Jan 20 18:26:43 crc kubenswrapper[4661]: I0120 18:26:43.670057 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-kube-api-access-mgkmn" (OuterVolumeSpecName: "kube-api-access-mgkmn") pod "9d53b16a-96f4-476d-bbfb-4b83adc3e33a" (UID: "9d53b16a-96f4-476d-bbfb-4b83adc3e33a"). InnerVolumeSpecName "kube-api-access-mgkmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:26:43 crc kubenswrapper[4661]: I0120 18:26:43.674812 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-scripts" (OuterVolumeSpecName: "scripts") pod "9d53b16a-96f4-476d-bbfb-4b83adc3e33a" (UID: "9d53b16a-96f4-476d-bbfb-4b83adc3e33a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:43 crc kubenswrapper[4661]: E0120 18:26:43.707013 4661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-combined-ca-bundle podName:9d53b16a-96f4-476d-bbfb-4b83adc3e33a nodeName:}" failed. No retries permitted until 2026-01-20 18:26:44.206985841 +0000 UTC m=+1260.537775503 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-combined-ca-bundle") pod "9d53b16a-96f4-476d-bbfb-4b83adc3e33a" (UID: "9d53b16a-96f4-476d-bbfb-4b83adc3e33a") : error deleting /var/lib/kubelet/pods/9d53b16a-96f4-476d-bbfb-4b83adc3e33a/volume-subpaths: remove /var/lib/kubelet/pods/9d53b16a-96f4-476d-bbfb-4b83adc3e33a/volume-subpaths: no such file or directory Jan 20 18:26:43 crc kubenswrapper[4661]: I0120 18:26:43.709805 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-config-data" (OuterVolumeSpecName: "config-data") pod "9d53b16a-96f4-476d-bbfb-4b83adc3e33a" (UID: "9d53b16a-96f4-476d-bbfb-4b83adc3e33a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:43 crc kubenswrapper[4661]: I0120 18:26:43.767636 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:43 crc kubenswrapper[4661]: I0120 18:26:43.767674 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgkmn\" (UniqueName: \"kubernetes.io/projected/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-kube-api-access-mgkmn\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:43 crc kubenswrapper[4661]: I0120 18:26:43.767687 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.289783 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-combined-ca-bundle\") pod \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\" (UID: \"9d53b16a-96f4-476d-bbfb-4b83adc3e33a\") " Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.296743 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d53b16a-96f4-476d-bbfb-4b83adc3e33a" (UID: "9d53b16a-96f4-476d-bbfb-4b83adc3e33a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.354817 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b7a79c6-309f-494e-87c6-429326682d11","Type":"ContainerStarted","Data":"e128093979c7147f4b1158a5d112ac5fca72fffa6d4813133733b7d4c8b7e5bf"} Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.356499 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zbltb" event={"ID":"9d53b16a-96f4-476d-bbfb-4b83adc3e33a","Type":"ContainerDied","Data":"4ecde656544ccf281d0c27ee84720360cca86efad0f7e13db2f1f810585faedc"} Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.356522 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ecde656544ccf281d0c27ee84720360cca86efad0f7e13db2f1f810585faedc" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.356585 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zbltb" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.391639 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d53b16a-96f4-476d-bbfb-4b83adc3e33a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.457747 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 20 18:26:44 crc kubenswrapper[4661]: E0120 18:26:44.458142 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d53b16a-96f4-476d-bbfb-4b83adc3e33a" containerName="nova-cell0-conductor-db-sync" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.458166 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d53b16a-96f4-476d-bbfb-4b83adc3e33a" containerName="nova-cell0-conductor-db-sync" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.458353 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d53b16a-96f4-476d-bbfb-4b83adc3e33a" containerName="nova-cell0-conductor-db-sync" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.458921 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.461017 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fpdcz" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.461027 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.467507 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.492503 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e5805d-a947-403a-b1dc-77949080c7be-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b5e5805d-a947-403a-b1dc-77949080c7be\") " pod="openstack/nova-cell0-conductor-0" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.492572 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lh5p\" (UniqueName: \"kubernetes.io/projected/b5e5805d-a947-403a-b1dc-77949080c7be-kube-api-access-8lh5p\") pod \"nova-cell0-conductor-0\" (UID: \"b5e5805d-a947-403a-b1dc-77949080c7be\") " pod="openstack/nova-cell0-conductor-0" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.492619 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e5805d-a947-403a-b1dc-77949080c7be-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b5e5805d-a947-403a-b1dc-77949080c7be\") " pod="openstack/nova-cell0-conductor-0" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.594699 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e5805d-a947-403a-b1dc-77949080c7be-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b5e5805d-a947-403a-b1dc-77949080c7be\") " pod="openstack/nova-cell0-conductor-0" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.595085 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8lh5p\" (UniqueName: \"kubernetes.io/projected/b5e5805d-a947-403a-b1dc-77949080c7be-kube-api-access-8lh5p\") pod \"nova-cell0-conductor-0\" (UID: \"b5e5805d-a947-403a-b1dc-77949080c7be\") " pod="openstack/nova-cell0-conductor-0" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.595247 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e5805d-a947-403a-b1dc-77949080c7be-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b5e5805d-a947-403a-b1dc-77949080c7be\") " pod="openstack/nova-cell0-conductor-0" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.600420 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e5805d-a947-403a-b1dc-77949080c7be-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b5e5805d-a947-403a-b1dc-77949080c7be\") " pod="openstack/nova-cell0-conductor-0" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.607769 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e5805d-a947-403a-b1dc-77949080c7be-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b5e5805d-a947-403a-b1dc-77949080c7be\") " pod="openstack/nova-cell0-conductor-0" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.639934 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lh5p\" (UniqueName: \"kubernetes.io/projected/b5e5805d-a947-403a-b1dc-77949080c7be-kube-api-access-8lh5p\") pod \"nova-cell0-conductor-0\" (UID: \"b5e5805d-a947-403a-b1dc-77949080c7be\") " pod="openstack/nova-cell0-conductor-0" Jan 20 18:26:44 crc kubenswrapper[4661]: I0120 18:26:44.773906 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 20 18:26:45 crc kubenswrapper[4661]: I0120 18:26:45.222017 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 20 18:26:45 crc kubenswrapper[4661]: I0120 18:26:45.373326 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b7a79c6-309f-494e-87c6-429326682d11","Type":"ContainerStarted","Data":"8059494993e39192f483ebc8c2da67d8f111007964dd7398a66b980da12b1347"} Jan 20 18:26:45 crc kubenswrapper[4661]: I0120 18:26:45.373704 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 18:26:45 crc kubenswrapper[4661]: I0120 18:26:45.374898 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b5e5805d-a947-403a-b1dc-77949080c7be","Type":"ContainerStarted","Data":"b9b628bffb982fb45f16716c1e31529bae47db1c83fa62b11b67a9361881ac46"} Jan 20 18:26:45 crc kubenswrapper[4661]: I0120 18:26:45.396312 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.279483038 podStartE2EDuration="5.396296728s" podCreationTimestamp="2026-01-20 18:26:40 +0000 UTC" firstStartedPulling="2026-01-20 18:26:41.506516693 +0000 UTC m=+1257.837306355" lastFinishedPulling="2026-01-20 18:26:44.623330383 +0000 UTC m=+1260.954120045" observedRunningTime="2026-01-20 18:26:45.390494988 +0000 UTC m=+1261.721284660" watchObservedRunningTime="2026-01-20 18:26:45.396296728 +0000 UTC m=+1261.727086390" Jan 20 18:26:46 crc kubenswrapper[4661]: I0120 18:26:46.385634 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b5e5805d-a947-403a-b1dc-77949080c7be","Type":"ContainerStarted","Data":"f7addfdfba93e1d3e936ce87b1841be0db471e519ca829926ba1feebb54e06ea"} Jan 20 18:26:46 crc kubenswrapper[4661]: I0120 18:26:46.411352 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.411320271 podStartE2EDuration="2.411320271s" podCreationTimestamp="2026-01-20 18:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:26:46.411117247 +0000 UTC m=+1262.741906909" watchObservedRunningTime="2026-01-20 18:26:46.411320271 +0000 UTC m=+1262.742109933" Jan 20 18:26:47 crc kubenswrapper[4661]: I0120 18:26:47.394698 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 20 18:26:54 crc kubenswrapper[4661]: I0120 18:26:54.810736 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.282494 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-n2hww"] Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.283551 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.290212 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.290225 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.304491 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-n2hww"] Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.330269 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-scripts\") pod \"nova-cell0-cell-mapping-n2hww\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.330355 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpp92\" (UniqueName: \"kubernetes.io/projected/1e665d0f-e6d1-4829-b745-262ce699c011-kube-api-access-cpp92\") pod \"nova-cell0-cell-mapping-n2hww\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.330478 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-n2hww\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.330559 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-config-data\") pod \"nova-cell0-cell-mapping-n2hww\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.424214 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.425557 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.430811 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.431562 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpp92\" (UniqueName: \"kubernetes.io/projected/1e665d0f-e6d1-4829-b745-262ce699c011-kube-api-access-cpp92\") pod \"nova-cell0-cell-mapping-n2hww\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.431625 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-n2hww\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.431684 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-config-data\") pod \"nova-cell0-cell-mapping-n2hww\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.431739 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-scripts\") pod \"nova-cell0-cell-mapping-n2hww\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.439016 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-n2hww\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.439343 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-config-data\") pod \"nova-cell0-cell-mapping-n2hww\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.450201 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-scripts\") pod \"nova-cell0-cell-mapping-n2hww\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.483615 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.501509 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpp92\" (UniqueName: \"kubernetes.io/projected/1e665d0f-e6d1-4829-b745-262ce699c011-kube-api-access-cpp92\") pod \"nova-cell0-cell-mapping-n2hww\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.533314 4661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a4f061-499b-40d4-9958-390e223559d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.533384 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80a4f061-499b-40d4-9958-390e223559d1-logs\") pod \"nova-api-0\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.533427 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a4f061-499b-40d4-9958-390e223559d1-config-data\") pod \"nova-api-0\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.533475 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvhxp\" (UniqueName: \"kubernetes.io/projected/80a4f061-499b-40d4-9958-390e223559d1-kube-api-access-bvhxp\") pod \"nova-api-0\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.602930 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.635748 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvhxp\" (UniqueName: \"kubernetes.io/projected/80a4f061-499b-40d4-9958-390e223559d1-kube-api-access-bvhxp\") pod \"nova-api-0\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.635812 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a4f061-499b-40d4-9958-390e223559d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.635857 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80a4f061-499b-40d4-9958-390e223559d1-logs\") pod \"nova-api-0\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.635897 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a4f061-499b-40d4-9958-390e223559d1-config-data\") pod \"nova-api-0\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.637561 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80a4f061-499b-40d4-9958-390e223559d1-logs\") pod \"nova-api-0\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.639346 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a4f061-499b-40d4-9958-390e223559d1-config-data\") pod \"nova-api-0\" 
(UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.639466 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.640920 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.655379 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a4f061-499b-40d4-9958-390e223559d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.666347 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.672682 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.673693 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.699537 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.700325 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.747053 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvhxp\" (UniqueName: \"kubernetes.io/projected/80a4f061-499b-40d4-9958-390e223559d1-kube-api-access-bvhxp\") pod \"nova-api-0\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.815182 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.816245 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.846295 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.847883 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-logs\") pod \"nova-metadata-0\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " pod="openstack/nova-metadata-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.847947 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8p88\" (UniqueName: \"kubernetes.io/projected/f0700cc3-9290-41f8-b785-ad17bf0917ed-kube-api-access-h8p88\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0700cc3-9290-41f8-b785-ad17bf0917ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.847968 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0700cc3-9290-41f8-b785-ad17bf0917ed-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0700cc3-9290-41f8-b785-ad17bf0917ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.848001 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfch6\" (UniqueName: \"kubernetes.io/projected/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-kube-api-access-tfch6\") pod \"nova-metadata-0\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " pod="openstack/nova-metadata-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.848028 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-config-data\") pod \"nova-metadata-0\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " pod="openstack/nova-metadata-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.848055 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0700cc3-9290-41f8-b785-ad17bf0917ed-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0700cc3-9290-41f8-b785-ad17bf0917ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.848080 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " pod="openstack/nova-metadata-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.856218 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.856688 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.919933 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.927294 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-222f9"] Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.928578 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.949005 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-222f9"] Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952222 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8p88\" (UniqueName: \"kubernetes.io/projected/f0700cc3-9290-41f8-b785-ad17bf0917ed-kube-api-access-h8p88\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0700cc3-9290-41f8-b785-ad17bf0917ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952262 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952284 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0700cc3-9290-41f8-b785-ad17bf0917ed-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0700cc3-9290-41f8-b785-ad17bf0917ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952321 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfch6\" (UniqueName: \"kubernetes.io/projected/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-kube-api-access-tfch6\") pod \"nova-metadata-0\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " pod="openstack/nova-metadata-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952349 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-config-data\") pod \"nova-metadata-0\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " pod="openstack/nova-metadata-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952375 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0700cc3-9290-41f8-b785-ad17bf0917ed-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0700cc3-9290-41f8-b785-ad17bf0917ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952395 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9836cc19-9ee3-4d13-9ae7-c774b403284a-config-data\") pod \"nova-scheduler-0\" (UID: \"9836cc19-9ee3-4d13-9ae7-c774b403284a\") " pod="openstack/nova-scheduler-0" Jan 20 18:26:55 crc 
kubenswrapper[4661]: I0120 18:26:55.952409 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9836cc19-9ee3-4d13-9ae7-c774b403284a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9836cc19-9ee3-4d13-9ae7-c774b403284a\") " pod="openstack/nova-scheduler-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952424 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952443 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " pod="openstack/nova-metadata-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952470 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-dns-svc\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952504 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdpv9\" (UniqueName: \"kubernetes.io/projected/1a7bdc38-707c-4869-8724-6e6319f3ccf6-kube-api-access-gdpv9\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952536 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-logs\") pod \"nova-metadata-0\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " pod="openstack/nova-metadata-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952559 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-config\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.952586 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtz5s\" (UniqueName: \"kubernetes.io/projected/9836cc19-9ee3-4d13-9ae7-c774b403284a-kube-api-access-qtz5s\") pod \"nova-scheduler-0\" (UID: \"9836cc19-9ee3-4d13-9ae7-c774b403284a\") " pod="openstack/nova-scheduler-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.961168 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-logs\") pod \"nova-metadata-0\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " pod="openstack/nova-metadata-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.971196 4661 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0700cc3-9290-41f8-b785-ad17bf0917ed-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0700cc3-9290-41f8-b785-ad17bf0917ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.971916 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfch6\" (UniqueName: \"kubernetes.io/projected/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-kube-api-access-tfch6\") pod \"nova-metadata-0\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " pod="openstack/nova-metadata-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.976274 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " pod="openstack/nova-metadata-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.976316 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8p88\" (UniqueName: \"kubernetes.io/projected/f0700cc3-9290-41f8-b785-ad17bf0917ed-kube-api-access-h8p88\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0700cc3-9290-41f8-b785-ad17bf0917ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.978379 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-config-data\") pod \"nova-metadata-0\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " pod="openstack/nova-metadata-0" Jan 20 18:26:55 crc kubenswrapper[4661]: I0120 18:26:55.979967 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0700cc3-9290-41f8-b785-ad17bf0917ed-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f0700cc3-9290-41f8-b785-ad17bf0917ed\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.057416 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtz5s\" (UniqueName: \"kubernetes.io/projected/9836cc19-9ee3-4d13-9ae7-c774b403284a-kube-api-access-qtz5s\") pod \"nova-scheduler-0\" (UID: \"9836cc19-9ee3-4d13-9ae7-c774b403284a\") " pod="openstack/nova-scheduler-0" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.057502 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.057621 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9836cc19-9ee3-4d13-9ae7-c774b403284a-config-data\") pod \"nova-scheduler-0\" (UID: \"9836cc19-9ee3-4d13-9ae7-c774b403284a\") " pod="openstack/nova-scheduler-0" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.057687 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9836cc19-9ee3-4d13-9ae7-c774b403284a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9836cc19-9ee3-4d13-9ae7-c774b403284a\") " 
pod="openstack/nova-scheduler-0" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.057705 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.057738 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-dns-svc\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.057799 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdpv9\" (UniqueName: \"kubernetes.io/projected/1a7bdc38-707c-4869-8724-6e6319f3ccf6-kube-api-access-gdpv9\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.057859 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-config\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.059696 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.059868 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-dns-svc\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.060289 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-config\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.060491 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.062213 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9836cc19-9ee3-4d13-9ae7-c774b403284a-config-data\") pod \"nova-scheduler-0\" (UID: \"9836cc19-9ee3-4d13-9ae7-c774b403284a\") " pod="openstack/nova-scheduler-0" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.065523 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9836cc19-9ee3-4d13-9ae7-c774b403284a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9836cc19-9ee3-4d13-9ae7-c774b403284a\") " pod="openstack/nova-scheduler-0" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.082216 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdpv9\" (UniqueName: \"kubernetes.io/projected/1a7bdc38-707c-4869-8724-6e6319f3ccf6-kube-api-access-gdpv9\") pod \"dnsmasq-dns-566b5b7845-222f9\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.082230 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtz5s\" (UniqueName: \"kubernetes.io/projected/9836cc19-9ee3-4d13-9ae7-c774b403284a-kube-api-access-qtz5s\") pod \"nova-scheduler-0\" (UID: \"9836cc19-9ee3-4d13-9ae7-c774b403284a\") " pod="openstack/nova-scheduler-0" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.092492 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.147887 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.218687 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.251566 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.343544 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-n2hww"] Jan 20 18:26:56 crc kubenswrapper[4661]: W0120 18:26:56.362983 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e665d0f_e6d1_4829_b745_262ce699c011.slice/crio-ed2bf8b9c87f024a7f2b35787d06945bd1b6ec22d066815a666b95fcd4af8606 WatchSource:0}: Error finding container ed2bf8b9c87f024a7f2b35787d06945bd1b6ec22d066815a666b95fcd4af8606: Status 404 returned error can't find the container with id ed2bf8b9c87f024a7f2b35787d06945bd1b6ec22d066815a666b95fcd4af8606 Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.510781 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-n2hww" event={"ID":"1e665d0f-e6d1-4829-b745-262ce699c011","Type":"ContainerStarted","Data":"ed2bf8b9c87f024a7f2b35787d06945bd1b6ec22d066815a666b95fcd4af8606"} Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.595405 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.604459 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 18:26:56 crc kubenswrapper[4661]: I0120 18:26:56.860102 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.032986 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.113943 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-222f9"] Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 
18:26:57.166473 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.332683 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9p79w"] Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.334092 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.338196 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.338578 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.359433 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9p79w"] Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.408932 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-config-data\") pod \"nova-cell1-conductor-db-sync-9p79w\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.409097 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96tqv\" (UniqueName: \"kubernetes.io/projected/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-kube-api-access-96tqv\") pod \"nova-cell1-conductor-db-sync-9p79w\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.409218 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9p79w\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.409288 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-scripts\") pod \"nova-cell1-conductor-db-sync-9p79w\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.510885 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9p79w\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.510946 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-scripts\") pod \"nova-cell1-conductor-db-sync-9p79w\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.511019 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-config-data\") pod \"nova-cell1-conductor-db-sync-9p79w\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.511066 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96tqv\" (UniqueName: \"kubernetes.io/projected/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-kube-api-access-96tqv\") pod \"nova-cell1-conductor-db-sync-9p79w\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.515167 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-scripts\") pod \"nova-cell1-conductor-db-sync-9p79w\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.515772 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9p79w\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.526284 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-config-data\") pod \"nova-cell1-conductor-db-sync-9p79w\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.543459 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96tqv\" (UniqueName: \"kubernetes.io/projected/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-kube-api-access-96tqv\") pod \"nova-cell1-conductor-db-sync-9p79w\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.548062 4661 generic.go:334] "Generic (PLEG): container finished" podID="1a7bdc38-707c-4869-8724-6e6319f3ccf6" containerID="ac95899342d34113b0d790041f47b4b0e1a8ddc101ff0079d6f661d2f6afd04e" exitCode=0 Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.548167 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-222f9" event={"ID":"1a7bdc38-707c-4869-8724-6e6319f3ccf6","Type":"ContainerDied","Data":"ac95899342d34113b0d790041f47b4b0e1a8ddc101ff0079d6f661d2f6afd04e"} Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.548204 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-222f9" event={"ID":"1a7bdc38-707c-4869-8724-6e6319f3ccf6","Type":"ContainerStarted","Data":"596eed7ca8913285e1a5b09db8048c7d8b6aabe62385555533164f3e3eb9b686"} Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.550608 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f0700cc3-9290-41f8-b785-ad17bf0917ed","Type":"ContainerStarted","Data":"1281a5fb7ef38717348c98c652c1553c933194e76009e1bb94374bfd79ee1089"} Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.560142 4661 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3","Type":"ContainerStarted","Data":"6d9fedce6956276c44f198d47ccb5db2c89762daf1876a287e11796132fa2cdc"} Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.563140 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80a4f061-499b-40d4-9958-390e223559d1","Type":"ContainerStarted","Data":"eec4b567528fb7812e3fd365d01b6b9648176921b2726047c2f704eacfdd1216"} Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.582264 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-n2hww" event={"ID":"1e665d0f-e6d1-4829-b745-262ce699c011","Type":"ContainerStarted","Data":"bb3d7e3bf9f512e16e057a0d13379209ee93ce7865d68839abaebac3894b6855"} Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.590929 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9836cc19-9ee3-4d13-9ae7-c774b403284a","Type":"ContainerStarted","Data":"628ea18ed17b171cdaea90e95cbf2873fd817da297f05ec180e150cc6c26c569"} Jan 20 18:26:57 crc kubenswrapper[4661]: I0120 18:26:57.658260 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:26:58 crc kubenswrapper[4661]: I0120 18:26:58.196836 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-n2hww" podStartSLOduration=3.196815588 podStartE2EDuration="3.196815588s" podCreationTimestamp="2026-01-20 18:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:26:57.612596346 +0000 UTC m=+1273.943386008" watchObservedRunningTime="2026-01-20 18:26:58.196815588 +0000 UTC m=+1274.527605270" Jan 20 18:26:58 crc kubenswrapper[4661]: I0120 18:26:58.206703 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9p79w"] Jan 20 18:26:58 crc kubenswrapper[4661]: W0120 18:26:58.218410 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf75d1b27_5cb6_496a_82ae_e3880dfd4b4c.slice/crio-8b8a41b282bf0ec0ec2eb649e59841aad6a2f9ed9225226174dd0ca5d70eb57f WatchSource:0}: Error finding container 8b8a41b282bf0ec0ec2eb649e59841aad6a2f9ed9225226174dd0ca5d70eb57f: Status 404 returned error can't find the container with id 8b8a41b282bf0ec0ec2eb649e59841aad6a2f9ed9225226174dd0ca5d70eb57f Jan 20 18:26:58 crc kubenswrapper[4661]: I0120 18:26:58.603190 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-222f9" event={"ID":"1a7bdc38-707c-4869-8724-6e6319f3ccf6","Type":"ContainerStarted","Data":"abcf39dc50198c72691c46c759fdbd2613e5ec363520acc4e246fff40428c656"} Jan 20 18:26:58 crc kubenswrapper[4661]: I0120 18:26:58.603874 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:26:58 crc kubenswrapper[4661]: I0120 18:26:58.623326 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9p79w" event={"ID":"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c","Type":"ContainerStarted","Data":"28e5b932ef2a30e51319c647957fe9b4e19d736cab1f5d41182433ea4c6885f6"} Jan 20 18:26:58 crc kubenswrapper[4661]: I0120 18:26:58.623370 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-9p79w" event={"ID":"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c","Type":"ContainerStarted","Data":"8b8a41b282bf0ec0ec2eb649e59841aad6a2f9ed9225226174dd0ca5d70eb57f"} Jan 20 18:26:58 crc kubenswrapper[4661]: I0120 18:26:58.634011 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-222f9" podStartSLOduration=3.633990073 podStartE2EDuration="3.633990073s" podCreationTimestamp="2026-01-20 18:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:26:58.623852178 +0000 UTC m=+1274.954641840" watchObservedRunningTime="2026-01-20 18:26:58.633990073 +0000 UTC m=+1274.964779735" Jan 20 18:26:58 crc kubenswrapper[4661]: I0120 18:26:58.649252 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9p79w" podStartSLOduration=1.649230201 podStartE2EDuration="1.649230201s" podCreationTimestamp="2026-01-20 18:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:26:58.640541751 +0000 UTC m=+1274.971331423" watchObservedRunningTime="2026-01-20 18:26:58.649230201 +0000 UTC m=+1274.980019863" Jan 20 18:26:59 crc kubenswrapper[4661]: I0120 18:26:59.424125 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:26:59 crc kubenswrapper[4661]: I0120 18:26:59.441764 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 18:27:01 crc kubenswrapper[4661]: I0120 18:27:01.645370 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80a4f061-499b-40d4-9958-390e223559d1","Type":"ContainerStarted","Data":"32b2b2daf650b7926639da40af76fafc7d0476a8c12ce85e77ca915191924019"} Jan 20 18:27:01 crc kubenswrapper[4661]: I0120 18:27:01.647684 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80a4f061-499b-40d4-9958-390e223559d1","Type":"ContainerStarted","Data":"e2c5cd34cf4e5266d6c0bc2f9ea81539f2ad5147260f9af65e235608a42c7bea"} Jan 20 18:27:01 crc kubenswrapper[4661]: I0120 18:27:01.648374 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9836cc19-9ee3-4d13-9ae7-c774b403284a","Type":"ContainerStarted","Data":"7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51"} Jan 20 18:27:01 crc kubenswrapper[4661]: I0120 18:27:01.650380 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f0700cc3-9290-41f8-b785-ad17bf0917ed","Type":"ContainerStarted","Data":"85ae039c6ba58ca8acaec9761c0ab69e29eda80f7400d96e264e079a6c80dc89"} Jan 20 18:27:01 crc kubenswrapper[4661]: I0120 18:27:01.650445 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f0700cc3-9290-41f8-b785-ad17bf0917ed" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://85ae039c6ba58ca8acaec9761c0ab69e29eda80f7400d96e264e079a6c80dc89" gracePeriod=30 Jan 20 18:27:01 crc kubenswrapper[4661]: I0120 18:27:01.664564 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3","Type":"ContainerStarted","Data":"ad8c0df9bbefaeb942992dae41dbab46d01f138f1a71e5cde396d744dfc97324"} Jan 20 
18:27:01 crc kubenswrapper[4661]: I0120 18:27:01.664850 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3","Type":"ContainerStarted","Data":"6c090336234f5539e42676dada652587ecda259ed63db1b980420dd6fe2a0008"} Jan 20 18:27:01 crc kubenswrapper[4661]: I0120 18:27:01.664985 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" containerName="nova-metadata-log" containerID="cri-o://6c090336234f5539e42676dada652587ecda259ed63db1b980420dd6fe2a0008" gracePeriod=30 Jan 20 18:27:01 crc kubenswrapper[4661]: I0120 18:27:01.665038 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" containerName="nova-metadata-metadata" containerID="cri-o://ad8c0df9bbefaeb942992dae41dbab46d01f138f1a71e5cde396d744dfc97324" gracePeriod=30 Jan 20 18:27:01 crc kubenswrapper[4661]: I0120 18:27:01.674978 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.360367272 podStartE2EDuration="6.674956013s" podCreationTimestamp="2026-01-20 18:26:55 +0000 UTC" firstStartedPulling="2026-01-20 18:26:56.604270044 +0000 UTC m=+1272.935059706" lastFinishedPulling="2026-01-20 18:27:00.918858785 +0000 UTC m=+1277.249648447" observedRunningTime="2026-01-20 18:27:01.670548137 +0000 UTC m=+1278.001337799" watchObservedRunningTime="2026-01-20 18:27:01.674956013 +0000 UTC m=+1278.005745675" Jan 20 18:27:01 crc kubenswrapper[4661]: I0120 18:27:01.689001 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.937076102 podStartE2EDuration="6.688985642s" podCreationTimestamp="2026-01-20 18:26:55 +0000 UTC" firstStartedPulling="2026-01-20 18:26:57.168200366 +0000 UTC m=+1273.498990028" lastFinishedPulling="2026-01-20 18:27:00.920109906 +0000 UTC m=+1277.250899568" observedRunningTime="2026-01-20 18:27:01.685795845 +0000 UTC m=+1278.016585507" watchObservedRunningTime="2026-01-20 18:27:01.688985642 +0000 UTC m=+1278.019775294" Jan 20 18:27:01 crc kubenswrapper[4661]: I0120 18:27:01.705483 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.812687692 podStartE2EDuration="6.705465209s" podCreationTimestamp="2026-01-20 18:26:55 +0000 UTC" firstStartedPulling="2026-01-20 18:26:57.030316801 +0000 UTC m=+1273.361106453" lastFinishedPulling="2026-01-20 18:27:00.923094298 +0000 UTC m=+1277.253883970" observedRunningTime="2026-01-20 18:27:01.699946156 +0000 UTC m=+1278.030735818" watchObservedRunningTime="2026-01-20 18:27:01.705465209 +0000 UTC m=+1278.036254871" Jan 20 18:27:02 crc kubenswrapper[4661]: I0120 18:27:02.675548 4661 generic.go:334] "Generic (PLEG): container finished" podID="f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" containerID="ad8c0df9bbefaeb942992dae41dbab46d01f138f1a71e5cde396d744dfc97324" exitCode=0 Jan 20 18:27:02 crc kubenswrapper[4661]: I0120 18:27:02.675833 4661 generic.go:334] "Generic (PLEG): container finished" podID="f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" containerID="6c090336234f5539e42676dada652587ecda259ed63db1b980420dd6fe2a0008" exitCode=143 Jan 20 18:27:02 crc kubenswrapper[4661]: I0120 18:27:02.675632 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3","Type":"ContainerDied","Data":"ad8c0df9bbefaeb942992dae41dbab46d01f138f1a71e5cde396d744dfc97324"} Jan 20 18:27:02 crc kubenswrapper[4661]: I0120 18:27:02.675881 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3","Type":"ContainerDied","Data":"6c090336234f5539e42676dada652587ecda259ed63db1b980420dd6fe2a0008"} Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.100094 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.124534 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-combined-ca-bundle\") pod \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.124588 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-logs\") pod \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.124623 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-config-data\") pod \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.124682 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfch6\" (UniqueName: \"kubernetes.io/projected/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-kube-api-access-tfch6\") pod \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\" (UID: \"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3\") " Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.125992 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-logs" (OuterVolumeSpecName: "logs") pod "f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" (UID: "f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.142957 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-kube-api-access-tfch6" (OuterVolumeSpecName: "kube-api-access-tfch6") pod "f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" (UID: "f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3"). InnerVolumeSpecName "kube-api-access-tfch6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.170101 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-config-data" (OuterVolumeSpecName: "config-data") pod "f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" (UID: "f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.172196 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" (UID: "f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.227187 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.227221 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-logs\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.227232 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.227242 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfch6\" (UniqueName: \"kubernetes.io/projected/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3-kube-api-access-tfch6\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.685343 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3","Type":"ContainerDied","Data":"6d9fedce6956276c44f198d47ccb5db2c89762daf1876a287e11796132fa2cdc"} Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.685611 4661 scope.go:117] "RemoveContainer" containerID="ad8c0df9bbefaeb942992dae41dbab46d01f138f1a71e5cde396d744dfc97324" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.685792 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.746316 4661 scope.go:117] "RemoveContainer" containerID="6c090336234f5539e42676dada652587ecda259ed63db1b980420dd6fe2a0008" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.776899 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.802857 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.818372 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:03 crc kubenswrapper[4661]: E0120 18:27:03.818765 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" containerName="nova-metadata-log" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.818785 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" containerName="nova-metadata-log" Jan 20 18:27:03 crc kubenswrapper[4661]: E0120 18:27:03.818819 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" containerName="nova-metadata-metadata" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.818826 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" containerName="nova-metadata-metadata" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.818986 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" containerName="nova-metadata-metadata" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.819032 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" containerName="nova-metadata-log" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.820024 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.833295 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.833383 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.838636 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzbpr\" (UniqueName: \"kubernetes.io/projected/9fa7c917-5b54-4460-924a-f0d01d60e5dd-kube-api-access-vzbpr\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.838817 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa7c917-5b54-4460-924a-f0d01d60e5dd-logs\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.838838 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.838872 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.838895 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-config-data\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.844012 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.940113 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzbpr\" (UniqueName: \"kubernetes.io/projected/9fa7c917-5b54-4460-924a-f0d01d60e5dd-kube-api-access-vzbpr\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.940506 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.940530 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa7c917-5b54-4460-924a-f0d01d60e5dd-logs\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " 
pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.940555 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.940574 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-config-data\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.940962 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa7c917-5b54-4460-924a-f0d01d60e5dd-logs\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.944651 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.945540 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-config-data\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.945963 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:03 crc kubenswrapper[4661]: I0120 18:27:03.958544 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzbpr\" (UniqueName: \"kubernetes.io/projected/9fa7c917-5b54-4460-924a-f0d01d60e5dd-kube-api-access-vzbpr\") pod \"nova-metadata-0\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " pod="openstack/nova-metadata-0" Jan 20 18:27:04 crc kubenswrapper[4661]: I0120 18:27:04.140826 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:27:04 crc kubenswrapper[4661]: I0120 18:27:04.153472 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3" path="/var/lib/kubelet/pods/f1c886f8-c610-434f-8ec1-bfbf0d7e6cb3/volumes" Jan 20 18:27:04 crc kubenswrapper[4661]: I0120 18:27:04.608299 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:04 crc kubenswrapper[4661]: I0120 18:27:04.696132 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa7c917-5b54-4460-924a-f0d01d60e5dd","Type":"ContainerStarted","Data":"a5ec842ae9c1ecdd02f820814bd0f72b5a4788a6e3c43490fa10eccf45ef61a2"} Jan 20 18:27:05 crc kubenswrapper[4661]: I0120 18:27:05.728712 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa7c917-5b54-4460-924a-f0d01d60e5dd","Type":"ContainerStarted","Data":"b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c"} Jan 20 18:27:05 crc kubenswrapper[4661]: I0120 18:27:05.728977 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa7c917-5b54-4460-924a-f0d01d60e5dd","Type":"ContainerStarted","Data":"f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0"} Jan 20 18:27:05 crc kubenswrapper[4661]: I0120 18:27:05.747230 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.74721512 podStartE2EDuration="2.74721512s" podCreationTimestamp="2026-01-20 18:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:27:05.746963124 +0000 UTC m=+1282.077752816" watchObservedRunningTime="2026-01-20 18:27:05.74721512 +0000 UTC m=+1282.078004782" Jan 20 18:27:05 crc kubenswrapper[4661]: I0120 18:27:05.857520 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 18:27:05 crc kubenswrapper[4661]: I0120 18:27:05.857576 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.156731 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.219394 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.219453 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.252841 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.263379 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.337421 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-fsv8j"] Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.337624 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" podUID="67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" containerName="dnsmasq-dns" 
containerID="cri-o://e74a347199d47c64729d7e0a4baaf234808f081cd5cc79ddf30d623ca11bd566" gracePeriod=10 Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.752696 4661 generic.go:334] "Generic (PLEG): container finished" podID="67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" containerID="e74a347199d47c64729d7e0a4baaf234808f081cd5cc79ddf30d623ca11bd566" exitCode=0 Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.753002 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" event={"ID":"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7","Type":"ContainerDied","Data":"e74a347199d47c64729d7e0a4baaf234808f081cd5cc79ddf30d623ca11bd566"} Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.756975 4661 generic.go:334] "Generic (PLEG): container finished" podID="1e665d0f-e6d1-4829-b745-262ce699c011" containerID="bb3d7e3bf9f512e16e057a0d13379209ee93ce7865d68839abaebac3894b6855" exitCode=0 Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.757794 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-n2hww" event={"ID":"1e665d0f-e6d1-4829-b745-262ce699c011","Type":"ContainerDied","Data":"bb3d7e3bf9f512e16e057a0d13379209ee93ce7865d68839abaebac3894b6855"} Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.802021 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.892723 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.942860 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="80a4f061-499b-40d4-9958-390e223559d1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 18:27:06 crc kubenswrapper[4661]: I0120 18:27:06.943148 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="80a4f061-499b-40d4-9958-390e223559d1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.001552 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-config\") pod \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.002452 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-dns-svc\") pod \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.002488 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwhc6\" (UniqueName: \"kubernetes.io/projected/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-kube-api-access-kwhc6\") pod \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.002952 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-ovsdbserver-nb\") pod \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.003244 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-ovsdbserver-sb\") pod \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\" (UID: \"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7\") " Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.010806 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-kube-api-access-kwhc6" (OuterVolumeSpecName: "kube-api-access-kwhc6") pod "67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" (UID: "67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7"). InnerVolumeSpecName "kube-api-access-kwhc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.053688 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" (UID: "67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.080885 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-config" (OuterVolumeSpecName: "config") pod "67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" (UID: "67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.094592 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" (UID: "67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.098908 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" (UID: "67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.107651 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwhc6\" (UniqueName: \"kubernetes.io/projected/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-kube-api-access-kwhc6\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.107838 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.107848 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.107857 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.107866 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.767330 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" event={"ID":"67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7","Type":"ContainerDied","Data":"0c4bb09fabc8c45a3ba4bcd8d8f4da140c1d4789d024a6c9564577b90d0feb46"} Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.767379 4661 scope.go:117] "RemoveContainer" containerID="e74a347199d47c64729d7e0a4baaf234808f081cd5cc79ddf30d623ca11bd566" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.767488 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-fsv8j" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.771657 4661 generic.go:334] "Generic (PLEG): container finished" podID="f75d1b27-5cb6-496a-82ae-e3880dfd4b4c" containerID="28e5b932ef2a30e51319c647957fe9b4e19d736cab1f5d41182433ea4c6885f6" exitCode=0 Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.771843 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9p79w" event={"ID":"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c","Type":"ContainerDied","Data":"28e5b932ef2a30e51319c647957fe9b4e19d736cab1f5d41182433ea4c6885f6"} Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.792979 4661 scope.go:117] "RemoveContainer" containerID="2d39e9c62669bf5e6d1cfe7d5b5ffea656fcdc7770f10e4ecca1d006ec76a61d" Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.843323 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-fsv8j"] Jan 20 18:27:07 crc kubenswrapper[4661]: I0120 18:27:07.856326 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-fsv8j"] Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.155579 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" path="/var/lib/kubelet/pods/67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7/volumes" Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.167385 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.228997 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpp92\" (UniqueName: \"kubernetes.io/projected/1e665d0f-e6d1-4829-b745-262ce699c011-kube-api-access-cpp92\") pod \"1e665d0f-e6d1-4829-b745-262ce699c011\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.229045 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-combined-ca-bundle\") pod \"1e665d0f-e6d1-4829-b745-262ce699c011\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.229087 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-scripts\") pod \"1e665d0f-e6d1-4829-b745-262ce699c011\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.229126 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-config-data\") pod \"1e665d0f-e6d1-4829-b745-262ce699c011\" (UID: \"1e665d0f-e6d1-4829-b745-262ce699c011\") " Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.233467 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-scripts" (OuterVolumeSpecName: "scripts") pod "1e665d0f-e6d1-4829-b745-262ce699c011" (UID: "1e665d0f-e6d1-4829-b745-262ce699c011"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.237914 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e665d0f-e6d1-4829-b745-262ce699c011-kube-api-access-cpp92" (OuterVolumeSpecName: "kube-api-access-cpp92") pod "1e665d0f-e6d1-4829-b745-262ce699c011" (UID: "1e665d0f-e6d1-4829-b745-262ce699c011"). InnerVolumeSpecName "kube-api-access-cpp92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.254462 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-config-data" (OuterVolumeSpecName: "config-data") pod "1e665d0f-e6d1-4829-b745-262ce699c011" (UID: "1e665d0f-e6d1-4829-b745-262ce699c011"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.266016 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e665d0f-e6d1-4829-b745-262ce699c011" (UID: "1e665d0f-e6d1-4829-b745-262ce699c011"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.330799 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpp92\" (UniqueName: \"kubernetes.io/projected/1e665d0f-e6d1-4829-b745-262ce699c011-kube-api-access-cpp92\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.331122 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.331210 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.331276 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e665d0f-e6d1-4829-b745-262ce699c011-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.782399 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-n2hww" Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.783323 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-n2hww" event={"ID":"1e665d0f-e6d1-4829-b745-262ce699c011","Type":"ContainerDied","Data":"ed2bf8b9c87f024a7f2b35787d06945bd1b6ec22d066815a666b95fcd4af8606"} Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.783411 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed2bf8b9c87f024a7f2b35787d06945bd1b6ec22d066815a666b95fcd4af8606" Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.955978 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.956198 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9836cc19-9ee3-4d13-9ae7-c774b403284a" containerName="nova-scheduler-scheduler" containerID="cri-o://7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51" gracePeriod=30 Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.967259 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.967837 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="80a4f061-499b-40d4-9958-390e223559d1" containerName="nova-api-log" containerID="cri-o://e2c5cd34cf4e5266d6c0bc2f9ea81539f2ad5147260f9af65e235608a42c7bea" gracePeriod=30 Jan 20 18:27:08 crc kubenswrapper[4661]: I0120 18:27:08.968323 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="80a4f061-499b-40d4-9958-390e223559d1" containerName="nova-api-api" containerID="cri-o://32b2b2daf650b7926639da40af76fafc7d0476a8c12ce85e77ca915191924019" gracePeriod=30 Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.045856 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.046031 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9fa7c917-5b54-4460-924a-f0d01d60e5dd" 
containerName="nova-metadata-log" containerID="cri-o://f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0" gracePeriod=30 Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.046453 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9fa7c917-5b54-4460-924a-f0d01d60e5dd" containerName="nova-metadata-metadata" containerID="cri-o://b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c" gracePeriod=30 Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.142311 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.142542 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.301172 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.462858 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-scripts\") pod \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.462984 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96tqv\" (UniqueName: \"kubernetes.io/projected/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-kube-api-access-96tqv\") pod \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.463083 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-config-data\") pod \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.463151 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-combined-ca-bundle\") pod \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\" (UID: \"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c\") " Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.478116 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-scripts" (OuterVolumeSpecName: "scripts") pod "f75d1b27-5cb6-496a-82ae-e3880dfd4b4c" (UID: "f75d1b27-5cb6-496a-82ae-e3880dfd4b4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.496847 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-kube-api-access-96tqv" (OuterVolumeSpecName: "kube-api-access-96tqv") pod "f75d1b27-5cb6-496a-82ae-e3880dfd4b4c" (UID: "f75d1b27-5cb6-496a-82ae-e3880dfd4b4c"). InnerVolumeSpecName "kube-api-access-96tqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.508823 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-config-data" (OuterVolumeSpecName: "config-data") pod "f75d1b27-5cb6-496a-82ae-e3880dfd4b4c" (UID: "f75d1b27-5cb6-496a-82ae-e3880dfd4b4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.538185 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f75d1b27-5cb6-496a-82ae-e3880dfd4b4c" (UID: "f75d1b27-5cb6-496a-82ae-e3880dfd4b4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.566046 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.566081 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.566095 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.566106 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96tqv\" (UniqueName: \"kubernetes.io/projected/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c-kube-api-access-96tqv\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.673033 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.768161 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-nova-metadata-tls-certs\") pod \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.768216 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzbpr\" (UniqueName: \"kubernetes.io/projected/9fa7c917-5b54-4460-924a-f0d01d60e5dd-kube-api-access-vzbpr\") pod \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.768294 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa7c917-5b54-4460-924a-f0d01d60e5dd-logs\") pod \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.768330 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-combined-ca-bundle\") pod \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.768387 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-config-data\") pod \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\" (UID: \"9fa7c917-5b54-4460-924a-f0d01d60e5dd\") " Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.768724 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fa7c917-5b54-4460-924a-f0d01d60e5dd-logs" (OuterVolumeSpecName: "logs") pod "9fa7c917-5b54-4460-924a-f0d01d60e5dd" (UID: "9fa7c917-5b54-4460-924a-f0d01d60e5dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.772852 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa7c917-5b54-4460-924a-f0d01d60e5dd-kube-api-access-vzbpr" (OuterVolumeSpecName: "kube-api-access-vzbpr") pod "9fa7c917-5b54-4460-924a-f0d01d60e5dd" (UID: "9fa7c917-5b54-4460-924a-f0d01d60e5dd"). InnerVolumeSpecName "kube-api-access-vzbpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.794562 4661 generic.go:334] "Generic (PLEG): container finished" podID="80a4f061-499b-40d4-9958-390e223559d1" containerID="e2c5cd34cf4e5266d6c0bc2f9ea81539f2ad5147260f9af65e235608a42c7bea" exitCode=143 Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.794636 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80a4f061-499b-40d4-9958-390e223559d1","Type":"ContainerDied","Data":"e2c5cd34cf4e5266d6c0bc2f9ea81539f2ad5147260f9af65e235608a42c7bea"} Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.796220 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9p79w" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.796214 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9p79w" event={"ID":"f75d1b27-5cb6-496a-82ae-e3880dfd4b4c","Type":"ContainerDied","Data":"8b8a41b282bf0ec0ec2eb649e59841aad6a2f9ed9225226174dd0ca5d70eb57f"} Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.796345 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b8a41b282bf0ec0ec2eb649e59841aad6a2f9ed9225226174dd0ca5d70eb57f" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.796437 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-config-data" (OuterVolumeSpecName: "config-data") pod "9fa7c917-5b54-4460-924a-f0d01d60e5dd" (UID: "9fa7c917-5b54-4460-924a-f0d01d60e5dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.796738 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fa7c917-5b54-4460-924a-f0d01d60e5dd" (UID: "9fa7c917-5b54-4460-924a-f0d01d60e5dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.798703 4661 generic.go:334] "Generic (PLEG): container finished" podID="9fa7c917-5b54-4460-924a-f0d01d60e5dd" containerID="b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c" exitCode=0 Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.798862 4661 generic.go:334] "Generic (PLEG): container finished" podID="9fa7c917-5b54-4460-924a-f0d01d60e5dd" containerID="f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0" exitCode=143 Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.798829 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.798793 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa7c917-5b54-4460-924a-f0d01d60e5dd","Type":"ContainerDied","Data":"b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c"} Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.799304 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa7c917-5b54-4460-924a-f0d01d60e5dd","Type":"ContainerDied","Data":"f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0"} Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.799364 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa7c917-5b54-4460-924a-f0d01d60e5dd","Type":"ContainerDied","Data":"a5ec842ae9c1ecdd02f820814bd0f72b5a4788a6e3c43490fa10eccf45ef61a2"} Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.799388 4661 scope.go:117] "RemoveContainer" containerID="b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.836990 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9fa7c917-5b54-4460-924a-f0d01d60e5dd" (UID: "9fa7c917-5b54-4460-924a-f0d01d60e5dd"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.844841 4661 scope.go:117] "RemoveContainer" containerID="f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.866589 4661 scope.go:117] "RemoveContainer" containerID="b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c" Jan 20 18:27:09 crc kubenswrapper[4661]: E0120 18:27:09.866998 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c\": container with ID starting with b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c not found: ID does not exist" containerID="b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.867042 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c"} err="failed to get container status \"b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c\": rpc error: code = NotFound desc = could not find container \"b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c\": container with ID starting with b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c not found: ID does not exist" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.867070 4661 scope.go:117] "RemoveContainer" containerID="f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0" Jan 20 18:27:09 crc kubenswrapper[4661]: E0120 18:27:09.867451 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0\": container with ID starting with f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0 not 
found: ID does not exist" containerID="f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.867476 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0"} err="failed to get container status \"f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0\": rpc error: code = NotFound desc = could not find container \"f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0\": container with ID starting with f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0 not found: ID does not exist" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.867493 4661 scope.go:117] "RemoveContainer" containerID="b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.867722 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c"} err="failed to get container status \"b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c\": rpc error: code = NotFound desc = could not find container \"b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c\": container with ID starting with b48ab3eb10504464fae84a78c6595bf33427d23d857def308da3506791d9883c not found: ID does not exist" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.867750 4661 scope.go:117] "RemoveContainer" containerID="f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.869525 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0"} err="failed to get container status \"f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0\": rpc error: code = NotFound desc = could not find container \"f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0\": container with ID starting with f2c38146ea86dc43dad0d3198a1703a3926a0216f675f179fb04767a359afde0 not found: ID does not exist" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.870927 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.870957 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.870967 4661 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa7c917-5b54-4460-924a-f0d01d60e5dd-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.870980 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzbpr\" (UniqueName: \"kubernetes.io/projected/9fa7c917-5b54-4460-924a-f0d01d60e5dd-kube-api-access-vzbpr\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.870990 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa7c917-5b54-4460-924a-f0d01d60e5dd-logs\") 
on node \"crc\" DevicePath \"\"" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.883918 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 20 18:27:09 crc kubenswrapper[4661]: E0120 18:27:09.884251 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75d1b27-5cb6-496a-82ae-e3880dfd4b4c" containerName="nova-cell1-conductor-db-sync" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.884270 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f75d1b27-5cb6-496a-82ae-e3880dfd4b4c" containerName="nova-cell1-conductor-db-sync" Jan 20 18:27:09 crc kubenswrapper[4661]: E0120 18:27:09.884280 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa7c917-5b54-4460-924a-f0d01d60e5dd" containerName="nova-metadata-log" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.884287 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa7c917-5b54-4460-924a-f0d01d60e5dd" containerName="nova-metadata-log" Jan 20 18:27:09 crc kubenswrapper[4661]: E0120 18:27:09.884307 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e665d0f-e6d1-4829-b745-262ce699c011" containerName="nova-manage" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.884313 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e665d0f-e6d1-4829-b745-262ce699c011" containerName="nova-manage" Jan 20 18:27:09 crc kubenswrapper[4661]: E0120 18:27:09.884324 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" containerName="init" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.884332 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" containerName="init" Jan 20 18:27:09 crc kubenswrapper[4661]: E0120 18:27:09.884341 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" containerName="dnsmasq-dns" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.884347 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" containerName="dnsmasq-dns" Jan 20 18:27:09 crc kubenswrapper[4661]: E0120 18:27:09.884366 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa7c917-5b54-4460-924a-f0d01d60e5dd" containerName="nova-metadata-metadata" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.884374 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa7c917-5b54-4460-924a-f0d01d60e5dd" containerName="nova-metadata-metadata" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.884537 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e439a0-e39b-4b8f-b3ec-9a4d33ff65b7" containerName="dnsmasq-dns" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.884552 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f75d1b27-5cb6-496a-82ae-e3880dfd4b4c" containerName="nova-cell1-conductor-db-sync" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.884572 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa7c917-5b54-4460-924a-f0d01d60e5dd" containerName="nova-metadata-metadata" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.884583 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa7c917-5b54-4460-924a-f0d01d60e5dd" containerName="nova-metadata-log" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.884602 4661 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1e665d0f-e6d1-4829-b745-262ce699c011" containerName="nova-manage" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.885171 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.890600 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.910966 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.972561 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebdbbfb8-e8c3-405b-914d-0ace13b50e32-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ebdbbfb8-e8c3-405b-914d-0ace13b50e32\") " pod="openstack/nova-cell1-conductor-0" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.972608 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxlm\" (UniqueName: \"kubernetes.io/projected/ebdbbfb8-e8c3-405b-914d-0ace13b50e32-kube-api-access-lzxlm\") pod \"nova-cell1-conductor-0\" (UID: \"ebdbbfb8-e8c3-405b-914d-0ace13b50e32\") " pod="openstack/nova-cell1-conductor-0" Jan 20 18:27:09 crc kubenswrapper[4661]: I0120 18:27:09.972754 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebdbbfb8-e8c3-405b-914d-0ace13b50e32-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ebdbbfb8-e8c3-405b-914d-0ace13b50e32\") " pod="openstack/nova-cell1-conductor-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.074234 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebdbbfb8-e8c3-405b-914d-0ace13b50e32-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ebdbbfb8-e8c3-405b-914d-0ace13b50e32\") " pod="openstack/nova-cell1-conductor-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.074603 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebdbbfb8-e8c3-405b-914d-0ace13b50e32-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ebdbbfb8-e8c3-405b-914d-0ace13b50e32\") " pod="openstack/nova-cell1-conductor-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.074760 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzxlm\" (UniqueName: \"kubernetes.io/projected/ebdbbfb8-e8c3-405b-914d-0ace13b50e32-kube-api-access-lzxlm\") pod \"nova-cell1-conductor-0\" (UID: \"ebdbbfb8-e8c3-405b-914d-0ace13b50e32\") " pod="openstack/nova-cell1-conductor-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.078082 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebdbbfb8-e8c3-405b-914d-0ace13b50e32-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ebdbbfb8-e8c3-405b-914d-0ace13b50e32\") " pod="openstack/nova-cell1-conductor-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.079370 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebdbbfb8-e8c3-405b-914d-0ace13b50e32-combined-ca-bundle\") pod 
\"nova-cell1-conductor-0\" (UID: \"ebdbbfb8-e8c3-405b-914d-0ace13b50e32\") " pod="openstack/nova-cell1-conductor-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.094536 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzxlm\" (UniqueName: \"kubernetes.io/projected/ebdbbfb8-e8c3-405b-914d-0ace13b50e32-kube-api-access-lzxlm\") pod \"nova-cell1-conductor-0\" (UID: \"ebdbbfb8-e8c3-405b-914d-0ace13b50e32\") " pod="openstack/nova-cell1-conductor-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.134873 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.162875 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.183751 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.185443 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.191449 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.191803 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.198649 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.217633 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.385437 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-config-data\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.385538 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.385565 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10d37a8-0883-4e5a-9594-b6c216b03c38-logs\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.385595 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xstc2\" (UniqueName: \"kubernetes.io/projected/b10d37a8-0883-4e5a-9594-b6c216b03c38-kube-api-access-xstc2\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.385633 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.486683 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-config-data\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.486730 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.486752 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10d37a8-0883-4e5a-9594-b6c216b03c38-logs\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.486784 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xstc2\" (UniqueName: \"kubernetes.io/projected/b10d37a8-0883-4e5a-9594-b6c216b03c38-kube-api-access-xstc2\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.486817 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.487556 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10d37a8-0883-4e5a-9594-b6c216b03c38-logs\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.491486 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.493400 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.494157 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-config-data\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.508926 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-xstc2\" (UniqueName: \"kubernetes.io/projected/b10d37a8-0883-4e5a-9594-b6c216b03c38-kube-api-access-xstc2\") pod \"nova-metadata-0\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.546649 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:27:10 crc kubenswrapper[4661]: I0120 18:27:10.944475 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 20 18:27:11 crc kubenswrapper[4661]: I0120 18:27:11.026784 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:11 crc kubenswrapper[4661]: I0120 18:27:11.041661 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 18:27:11 crc kubenswrapper[4661]: E0120 18:27:11.226485 4661 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 18:27:11 crc kubenswrapper[4661]: E0120 18:27:11.234392 4661 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 18:27:11 crc kubenswrapper[4661]: E0120 18:27:11.235536 4661 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 18:27:11 crc kubenswrapper[4661]: E0120 18:27:11.235605 4661 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9836cc19-9ee3-4d13-9ae7-c774b403284a" containerName="nova-scheduler-scheduler" Jan 20 18:27:11 crc kubenswrapper[4661]: I0120 18:27:11.821346 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b10d37a8-0883-4e5a-9594-b6c216b03c38","Type":"ContainerStarted","Data":"49efe543fd7bba20e2fb74d878c116c5c2b7c2095c0136a8ea6e06ae5ceb501d"} Jan 20 18:27:11 crc kubenswrapper[4661]: I0120 18:27:11.821396 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b10d37a8-0883-4e5a-9594-b6c216b03c38","Type":"ContainerStarted","Data":"49b02e4ed668519864414d43c44c2b4ef92f630699e9f34bb24814112bd47ce5"} Jan 20 18:27:11 crc kubenswrapper[4661]: I0120 18:27:11.821409 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b10d37a8-0883-4e5a-9594-b6c216b03c38","Type":"ContainerStarted","Data":"5af71259454017f1ba4c791f64cd2e0b36bc1ecb0c9668f45f06e1cf5bfb1432"} Jan 20 18:27:11 crc kubenswrapper[4661]: I0120 18:27:11.824862 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"ebdbbfb8-e8c3-405b-914d-0ace13b50e32","Type":"ContainerStarted","Data":"d3e1368a0edecb0f9f18248038641aa5e68b5009590fc255ca9e5e4d50b4bd50"} Jan 20 18:27:11 crc kubenswrapper[4661]: I0120 18:27:11.824957 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ebdbbfb8-e8c3-405b-914d-0ace13b50e32","Type":"ContainerStarted","Data":"3b69b5b6a3fcca369aa5ddd7b20b1d7554c14d8b98a83ecd17cdcadd6e3f37fd"} Jan 20 18:27:11 crc kubenswrapper[4661]: I0120 18:27:11.825311 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 20 18:27:11 crc kubenswrapper[4661]: I0120 18:27:11.866563 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.8665420830000001 podStartE2EDuration="1.866542083s" podCreationTimestamp="2026-01-20 18:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:27:11.849065001 +0000 UTC m=+1288.179854683" watchObservedRunningTime="2026-01-20 18:27:11.866542083 +0000 UTC m=+1288.197331765" Jan 20 18:27:11 crc kubenswrapper[4661]: I0120 18:27:11.886152 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.8861323949999997 podStartE2EDuration="2.886132395s" podCreationTimestamp="2026-01-20 18:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:27:11.86313235 +0000 UTC m=+1288.193922022" watchObservedRunningTime="2026-01-20 18:27:11.886132395 +0000 UTC m=+1288.216922057" Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.153635 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa7c917-5b54-4460-924a-f0d01d60e5dd" path="/var/lib/kubelet/pods/9fa7c917-5b54-4460-924a-f0d01d60e5dd/volumes" Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.806871 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.843095 4661 generic.go:334] "Generic (PLEG): container finished" podID="80a4f061-499b-40d4-9958-390e223559d1" containerID="32b2b2daf650b7926639da40af76fafc7d0476a8c12ce85e77ca915191924019" exitCode=0 Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.844185 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.844249 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80a4f061-499b-40d4-9958-390e223559d1","Type":"ContainerDied","Data":"32b2b2daf650b7926639da40af76fafc7d0476a8c12ce85e77ca915191924019"} Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.844299 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"80a4f061-499b-40d4-9958-390e223559d1","Type":"ContainerDied","Data":"eec4b567528fb7812e3fd365d01b6b9648176921b2726047c2f704eacfdd1216"} Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.844320 4661 scope.go:117] "RemoveContainer" containerID="32b2b2daf650b7926639da40af76fafc7d0476a8c12ce85e77ca915191924019" Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.872329 4661 scope.go:117] "RemoveContainer" containerID="e2c5cd34cf4e5266d6c0bc2f9ea81539f2ad5147260f9af65e235608a42c7bea" Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.896796 4661 scope.go:117] "RemoveContainer" containerID="32b2b2daf650b7926639da40af76fafc7d0476a8c12ce85e77ca915191924019" Jan 20 18:27:12 crc kubenswrapper[4661]: E0120 18:27:12.897477 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32b2b2daf650b7926639da40af76fafc7d0476a8c12ce85e77ca915191924019\": container with ID starting with 32b2b2daf650b7926639da40af76fafc7d0476a8c12ce85e77ca915191924019 not found: ID does not exist" containerID="32b2b2daf650b7926639da40af76fafc7d0476a8c12ce85e77ca915191924019" Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.897531 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32b2b2daf650b7926639da40af76fafc7d0476a8c12ce85e77ca915191924019"} err="failed to get container status \"32b2b2daf650b7926639da40af76fafc7d0476a8c12ce85e77ca915191924019\": rpc error: code = NotFound desc = could not find container \"32b2b2daf650b7926639da40af76fafc7d0476a8c12ce85e77ca915191924019\": container with ID starting with 32b2b2daf650b7926639da40af76fafc7d0476a8c12ce85e77ca915191924019 not found: ID does not exist" Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.897565 4661 scope.go:117] "RemoveContainer" containerID="e2c5cd34cf4e5266d6c0bc2f9ea81539f2ad5147260f9af65e235608a42c7bea" Jan 20 18:27:12 crc kubenswrapper[4661]: E0120 18:27:12.900071 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c5cd34cf4e5266d6c0bc2f9ea81539f2ad5147260f9af65e235608a42c7bea\": container with ID starting with e2c5cd34cf4e5266d6c0bc2f9ea81539f2ad5147260f9af65e235608a42c7bea not found: ID does not exist" containerID="e2c5cd34cf4e5266d6c0bc2f9ea81539f2ad5147260f9af65e235608a42c7bea" Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.900102 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c5cd34cf4e5266d6c0bc2f9ea81539f2ad5147260f9af65e235608a42c7bea"} err="failed to get container status \"e2c5cd34cf4e5266d6c0bc2f9ea81539f2ad5147260f9af65e235608a42c7bea\": rpc error: code = NotFound desc = could not find container \"e2c5cd34cf4e5266d6c0bc2f9ea81539f2ad5147260f9af65e235608a42c7bea\": container with ID starting with e2c5cd34cf4e5266d6c0bc2f9ea81539f2ad5147260f9af65e235608a42c7bea not found: ID does not exist" Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.932209 4661 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvhxp\" (UniqueName: \"kubernetes.io/projected/80a4f061-499b-40d4-9958-390e223559d1-kube-api-access-bvhxp\") pod \"80a4f061-499b-40d4-9958-390e223559d1\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.932606 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80a4f061-499b-40d4-9958-390e223559d1-logs\") pod \"80a4f061-499b-40d4-9958-390e223559d1\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.932826 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a4f061-499b-40d4-9958-390e223559d1-combined-ca-bundle\") pod \"80a4f061-499b-40d4-9958-390e223559d1\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.932931 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a4f061-499b-40d4-9958-390e223559d1-config-data\") pod \"80a4f061-499b-40d4-9958-390e223559d1\" (UID: \"80a4f061-499b-40d4-9958-390e223559d1\") " Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.934477 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80a4f061-499b-40d4-9958-390e223559d1-logs" (OuterVolumeSpecName: "logs") pod "80a4f061-499b-40d4-9958-390e223559d1" (UID: "80a4f061-499b-40d4-9958-390e223559d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.944882 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80a4f061-499b-40d4-9958-390e223559d1-kube-api-access-bvhxp" (OuterVolumeSpecName: "kube-api-access-bvhxp") pod "80a4f061-499b-40d4-9958-390e223559d1" (UID: "80a4f061-499b-40d4-9958-390e223559d1"). InnerVolumeSpecName "kube-api-access-bvhxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:12 crc kubenswrapper[4661]: I0120 18:27:12.982176 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80a4f061-499b-40d4-9958-390e223559d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80a4f061-499b-40d4-9958-390e223559d1" (UID: "80a4f061-499b-40d4-9958-390e223559d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.004874 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80a4f061-499b-40d4-9958-390e223559d1-config-data" (OuterVolumeSpecName: "config-data") pod "80a4f061-499b-40d4-9958-390e223559d1" (UID: "80a4f061-499b-40d4-9958-390e223559d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.036819 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80a4f061-499b-40d4-9958-390e223559d1-logs\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.036853 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a4f061-499b-40d4-9958-390e223559d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.036868 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a4f061-499b-40d4-9958-390e223559d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.036877 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvhxp\" (UniqueName: \"kubernetes.io/projected/80a4f061-499b-40d4-9958-390e223559d1-kube-api-access-bvhxp\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.696700 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.704999 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.748041 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 18:27:13 crc kubenswrapper[4661]: E0120 18:27:13.748447 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a4f061-499b-40d4-9958-390e223559d1" containerName="nova-api-log" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.748469 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a4f061-499b-40d4-9958-390e223559d1" containerName="nova-api-log" Jan 20 18:27:13 crc kubenswrapper[4661]: E0120 18:27:13.748501 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a4f061-499b-40d4-9958-390e223559d1" containerName="nova-api-api" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.748507 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a4f061-499b-40d4-9958-390e223559d1" containerName="nova-api-api" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.748683 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a4f061-499b-40d4-9958-390e223559d1" containerName="nova-api-log" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.748707 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a4f061-499b-40d4-9958-390e223559d1" containerName="nova-api-api" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.749641 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.751824 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.788225 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.950098 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5gzr\" (UniqueName: \"kubernetes.io/projected/b5862fe2-a3a1-4918-bf74-1c8df23a2584-kube-api-access-z5gzr\") pod \"nova-api-0\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " pod="openstack/nova-api-0" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.950152 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5862fe2-a3a1-4918-bf74-1c8df23a2584-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " pod="openstack/nova-api-0" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.950179 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5862fe2-a3a1-4918-bf74-1c8df23a2584-config-data\") pod \"nova-api-0\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " pod="openstack/nova-api-0" Jan 20 18:27:13 crc kubenswrapper[4661]: I0120 18:27:13.950405 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5862fe2-a3a1-4918-bf74-1c8df23a2584-logs\") pod \"nova-api-0\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " pod="openstack/nova-api-0" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.055078 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5gzr\" (UniqueName: \"kubernetes.io/projected/b5862fe2-a3a1-4918-bf74-1c8df23a2584-kube-api-access-z5gzr\") pod \"nova-api-0\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " pod="openstack/nova-api-0" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.055603 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5862fe2-a3a1-4918-bf74-1c8df23a2584-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " pod="openstack/nova-api-0" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.055654 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5862fe2-a3a1-4918-bf74-1c8df23a2584-config-data\") pod \"nova-api-0\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " pod="openstack/nova-api-0" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.055793 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5862fe2-a3a1-4918-bf74-1c8df23a2584-logs\") pod \"nova-api-0\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " pod="openstack/nova-api-0" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.060916 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5862fe2-a3a1-4918-bf74-1c8df23a2584-logs\") pod \"nova-api-0\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " 
pod="openstack/nova-api-0" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.074651 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5862fe2-a3a1-4918-bf74-1c8df23a2584-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " pod="openstack/nova-api-0" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.077194 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5862fe2-a3a1-4918-bf74-1c8df23a2584-config-data\") pod \"nova-api-0\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " pod="openstack/nova-api-0" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.084216 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5gzr\" (UniqueName: \"kubernetes.io/projected/b5862fe2-a3a1-4918-bf74-1c8df23a2584-kube-api-access-z5gzr\") pod \"nova-api-0\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " pod="openstack/nova-api-0" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.152396 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80a4f061-499b-40d4-9958-390e223559d1" path="/var/lib/kubelet/pods/80a4f061-499b-40d4-9958-390e223559d1/volumes" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.369013 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.840025 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.888218 4661 generic.go:334] "Generic (PLEG): container finished" podID="9836cc19-9ee3-4d13-9ae7-c774b403284a" containerID="7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51" exitCode=0 Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.888356 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9836cc19-9ee3-4d13-9ae7-c774b403284a","Type":"ContainerDied","Data":"7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51"} Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.888440 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9836cc19-9ee3-4d13-9ae7-c774b403284a","Type":"ContainerDied","Data":"628ea18ed17b171cdaea90e95cbf2873fd817da297f05ec180e150cc6c26c569"} Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.888474 4661 scope.go:117] "RemoveContainer" containerID="7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.888655 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.924414 4661 scope.go:117] "RemoveContainer" containerID="7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51" Jan 20 18:27:14 crc kubenswrapper[4661]: E0120 18:27:14.925148 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51\": container with ID starting with 7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51 not found: ID does not exist" containerID="7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.925644 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51"} err="failed to get container status \"7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51\": rpc error: code = NotFound desc = could not find container \"7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51\": container with ID starting with 7d90cc24945686cc416855e344577f6460ed94c770e46d35d11778209756fa51 not found: ID does not exist" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.969855 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9836cc19-9ee3-4d13-9ae7-c774b403284a-config-data\") pod \"9836cc19-9ee3-4d13-9ae7-c774b403284a\" (UID: \"9836cc19-9ee3-4d13-9ae7-c774b403284a\") " Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.970015 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9836cc19-9ee3-4d13-9ae7-c774b403284a-combined-ca-bundle\") pod \"9836cc19-9ee3-4d13-9ae7-c774b403284a\" (UID: \"9836cc19-9ee3-4d13-9ae7-c774b403284a\") " Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.970192 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtz5s\" (UniqueName: \"kubernetes.io/projected/9836cc19-9ee3-4d13-9ae7-c774b403284a-kube-api-access-qtz5s\") pod \"9836cc19-9ee3-4d13-9ae7-c774b403284a\" (UID: \"9836cc19-9ee3-4d13-9ae7-c774b403284a\") " Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.977271 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9836cc19-9ee3-4d13-9ae7-c774b403284a-kube-api-access-qtz5s" (OuterVolumeSpecName: "kube-api-access-qtz5s") pod "9836cc19-9ee3-4d13-9ae7-c774b403284a" (UID: "9836cc19-9ee3-4d13-9ae7-c774b403284a"). InnerVolumeSpecName "kube-api-access-qtz5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:14 crc kubenswrapper[4661]: I0120 18:27:14.996936 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9836cc19-9ee3-4d13-9ae7-c774b403284a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9836cc19-9ee3-4d13-9ae7-c774b403284a" (UID: "9836cc19-9ee3-4d13-9ae7-c774b403284a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.006884 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9836cc19-9ee3-4d13-9ae7-c774b403284a-config-data" (OuterVolumeSpecName: "config-data") pod "9836cc19-9ee3-4d13-9ae7-c774b403284a" (UID: "9836cc19-9ee3-4d13-9ae7-c774b403284a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.045558 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:27:15 crc kubenswrapper[4661]: W0120 18:27:15.052076 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5862fe2_a3a1_4918_bf74_1c8df23a2584.slice/crio-e4e7aeb9e17fbbd565ca3e7a89214730bf634823920fa2d944f96361953bedce WatchSource:0}: Error finding container e4e7aeb9e17fbbd565ca3e7a89214730bf634823920fa2d944f96361953bedce: Status 404 returned error can't find the container with id e4e7aeb9e17fbbd565ca3e7a89214730bf634823920fa2d944f96361953bedce Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.077615 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9836cc19-9ee3-4d13-9ae7-c774b403284a-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.077652 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9836cc19-9ee3-4d13-9ae7-c774b403284a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.077678 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtz5s\" (UniqueName: \"kubernetes.io/projected/9836cc19-9ee3-4d13-9ae7-c774b403284a-kube-api-access-qtz5s\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.232455 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.242043 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.250365 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:27:15 crc kubenswrapper[4661]: E0120 18:27:15.250739 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9836cc19-9ee3-4d13-9ae7-c774b403284a" containerName="nova-scheduler-scheduler" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.250756 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="9836cc19-9ee3-4d13-9ae7-c774b403284a" containerName="nova-scheduler-scheduler" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.250899 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="9836cc19-9ee3-4d13-9ae7-c774b403284a" containerName="nova-scheduler-scheduler" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.251576 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.253609 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.269113 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.282313 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ac36711-8044-4bf7-a7a3-12f6805f0538-config-data\") pod \"nova-scheduler-0\" (UID: \"3ac36711-8044-4bf7-a7a3-12f6805f0538\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.282454 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ac36711-8044-4bf7-a7a3-12f6805f0538-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3ac36711-8044-4bf7-a7a3-12f6805f0538\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.282509 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f2vx\" (UniqueName: \"kubernetes.io/projected/3ac36711-8044-4bf7-a7a3-12f6805f0538-kube-api-access-9f2vx\") pod \"nova-scheduler-0\" (UID: \"3ac36711-8044-4bf7-a7a3-12f6805f0538\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.383582 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ac36711-8044-4bf7-a7a3-12f6805f0538-config-data\") pod \"nova-scheduler-0\" (UID: \"3ac36711-8044-4bf7-a7a3-12f6805f0538\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.384066 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ac36711-8044-4bf7-a7a3-12f6805f0538-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3ac36711-8044-4bf7-a7a3-12f6805f0538\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.384109 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f2vx\" (UniqueName: \"kubernetes.io/projected/3ac36711-8044-4bf7-a7a3-12f6805f0538-kube-api-access-9f2vx\") pod \"nova-scheduler-0\" (UID: \"3ac36711-8044-4bf7-a7a3-12f6805f0538\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.390459 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ac36711-8044-4bf7-a7a3-12f6805f0538-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3ac36711-8044-4bf7-a7a3-12f6805f0538\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.392150 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ac36711-8044-4bf7-a7a3-12f6805f0538-config-data\") pod \"nova-scheduler-0\" (UID: \"3ac36711-8044-4bf7-a7a3-12f6805f0538\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.403261 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f2vx\" (UniqueName: 
\"kubernetes.io/projected/3ac36711-8044-4bf7-a7a3-12f6805f0538-kube-api-access-9f2vx\") pod \"nova-scheduler-0\" (UID: \"3ac36711-8044-4bf7-a7a3-12f6805f0538\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.547645 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.547739 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.575978 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.896702 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5862fe2-a3a1-4918-bf74-1c8df23a2584","Type":"ContainerStarted","Data":"3675801446bfa6cf88d7b4cf903d877df797578ba07e86f6d5bcfb1d67b142d4"} Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.897161 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5862fe2-a3a1-4918-bf74-1c8df23a2584","Type":"ContainerStarted","Data":"11ab9a9b7ece251b799f535eaec6ff94347521b988eb1201b772db1fb4d1ef57"} Jan 20 18:27:15 crc kubenswrapper[4661]: I0120 18:27:15.897174 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5862fe2-a3a1-4918-bf74-1c8df23a2584","Type":"ContainerStarted","Data":"e4e7aeb9e17fbbd565ca3e7a89214730bf634823920fa2d944f96361953bedce"} Jan 20 18:27:16 crc kubenswrapper[4661]: I0120 18:27:16.049329 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.049309137 podStartE2EDuration="3.049309137s" podCreationTimestamp="2026-01-20 18:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:27:15.929386585 +0000 UTC m=+1292.260176247" watchObservedRunningTime="2026-01-20 18:27:16.049309137 +0000 UTC m=+1292.380098799" Jan 20 18:27:16 crc kubenswrapper[4661]: I0120 18:27:16.055501 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:27:16 crc kubenswrapper[4661]: W0120 18:27:16.056103 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ac36711_8044_4bf7_a7a3_12f6805f0538.slice/crio-ef0c09c5229e734bc9d5ee60b13fa0c34ae3ed4bacf8e905ddb11d40461fa6d8 WatchSource:0}: Error finding container ef0c09c5229e734bc9d5ee60b13fa0c34ae3ed4bacf8e905ddb11d40461fa6d8: Status 404 returned error can't find the container with id ef0c09c5229e734bc9d5ee60b13fa0c34ae3ed4bacf8e905ddb11d40461fa6d8 Jan 20 18:27:16 crc kubenswrapper[4661]: I0120 18:27:16.150918 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9836cc19-9ee3-4d13-9ae7-c774b403284a" path="/var/lib/kubelet/pods/9836cc19-9ee3-4d13-9ae7-c774b403284a/volumes" Jan 20 18:27:16 crc kubenswrapper[4661]: I0120 18:27:16.911466 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3ac36711-8044-4bf7-a7a3-12f6805f0538","Type":"ContainerStarted","Data":"aa8e7288364cad916dff48d1f1dbcfccccb4192b05929a79050678fdbc4ecc24"} Jan 20 18:27:16 crc kubenswrapper[4661]: I0120 18:27:16.911806 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"3ac36711-8044-4bf7-a7a3-12f6805f0538","Type":"ContainerStarted","Data":"ef0c09c5229e734bc9d5ee60b13fa0c34ae3ed4bacf8e905ddb11d40461fa6d8"} Jan 20 18:27:20 crc kubenswrapper[4661]: I0120 18:27:20.248618 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 20 18:27:20 crc kubenswrapper[4661]: I0120 18:27:20.264767 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=5.264749752 podStartE2EDuration="5.264749752s" podCreationTimestamp="2026-01-20 18:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:27:16.937930421 +0000 UTC m=+1293.268720093" watchObservedRunningTime="2026-01-20 18:27:20.264749752 +0000 UTC m=+1296.595539414" Jan 20 18:27:20 crc kubenswrapper[4661]: I0120 18:27:20.547287 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 18:27:20 crc kubenswrapper[4661]: I0120 18:27:20.547341 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 18:27:20 crc kubenswrapper[4661]: I0120 18:27:20.576612 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 20 18:27:21 crc kubenswrapper[4661]: I0120 18:27:21.560013 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 18:27:21 crc kubenswrapper[4661]: I0120 18:27:21.560027 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 18:27:24 crc kubenswrapper[4661]: I0120 18:27:24.370662 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 18:27:24 crc kubenswrapper[4661]: I0120 18:27:24.371049 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 18:27:25 crc kubenswrapper[4661]: I0120 18:27:25.453851 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b5862fe2-a3a1-4918-bf74-1c8df23a2584" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 18:27:25 crc kubenswrapper[4661]: I0120 18:27:25.453875 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b5862fe2-a3a1-4918-bf74-1c8df23a2584" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 18:27:25 crc kubenswrapper[4661]: I0120 18:27:25.576886 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 20 18:27:25 crc kubenswrapper[4661]: I0120 18:27:25.714034 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 20 
18:27:26 crc kubenswrapper[4661]: I0120 18:27:26.006313 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 20 18:27:30 crc kubenswrapper[4661]: I0120 18:27:30.553255 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 18:27:30 crc kubenswrapper[4661]: I0120 18:27:30.557270 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 18:27:30 crc kubenswrapper[4661]: I0120 18:27:30.580652 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 18:27:31 crc kubenswrapper[4661]: I0120 18:27:31.030166 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.025609 4661 generic.go:334] "Generic (PLEG): container finished" podID="f0700cc3-9290-41f8-b785-ad17bf0917ed" containerID="85ae039c6ba58ca8acaec9761c0ab69e29eda80f7400d96e264e079a6c80dc89" exitCode=137 Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.025707 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f0700cc3-9290-41f8-b785-ad17bf0917ed","Type":"ContainerDied","Data":"85ae039c6ba58ca8acaec9761c0ab69e29eda80f7400d96e264e079a6c80dc89"} Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.026035 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f0700cc3-9290-41f8-b785-ad17bf0917ed","Type":"ContainerDied","Data":"1281a5fb7ef38717348c98c652c1553c933194e76009e1bb94374bfd79ee1089"} Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.026060 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1281a5fb7ef38717348c98c652c1553c933194e76009e1bb94374bfd79ee1089" Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.029649 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.136571 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0700cc3-9290-41f8-b785-ad17bf0917ed-combined-ca-bundle\") pod \"f0700cc3-9290-41f8-b785-ad17bf0917ed\" (UID: \"f0700cc3-9290-41f8-b785-ad17bf0917ed\") " Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.136760 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8p88\" (UniqueName: \"kubernetes.io/projected/f0700cc3-9290-41f8-b785-ad17bf0917ed-kube-api-access-h8p88\") pod \"f0700cc3-9290-41f8-b785-ad17bf0917ed\" (UID: \"f0700cc3-9290-41f8-b785-ad17bf0917ed\") " Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.136822 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0700cc3-9290-41f8-b785-ad17bf0917ed-config-data\") pod \"f0700cc3-9290-41f8-b785-ad17bf0917ed\" (UID: \"f0700cc3-9290-41f8-b785-ad17bf0917ed\") " Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.142697 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0700cc3-9290-41f8-b785-ad17bf0917ed-kube-api-access-h8p88" (OuterVolumeSpecName: "kube-api-access-h8p88") pod "f0700cc3-9290-41f8-b785-ad17bf0917ed" (UID: "f0700cc3-9290-41f8-b785-ad17bf0917ed"). 
InnerVolumeSpecName "kube-api-access-h8p88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.160895 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0700cc3-9290-41f8-b785-ad17bf0917ed-config-data" (OuterVolumeSpecName: "config-data") pod "f0700cc3-9290-41f8-b785-ad17bf0917ed" (UID: "f0700cc3-9290-41f8-b785-ad17bf0917ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.165025 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0700cc3-9290-41f8-b785-ad17bf0917ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0700cc3-9290-41f8-b785-ad17bf0917ed" (UID: "f0700cc3-9290-41f8-b785-ad17bf0917ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.238364 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8p88\" (UniqueName: \"kubernetes.io/projected/f0700cc3-9290-41f8-b785-ad17bf0917ed-kube-api-access-h8p88\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.238397 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0700cc3-9290-41f8-b785-ad17bf0917ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:32 crc kubenswrapper[4661]: I0120 18:27:32.238407 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0700cc3-9290-41f8-b785-ad17bf0917ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.034297 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.083923 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.098695 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.110733 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 18:27:33 crc kubenswrapper[4661]: E0120 18:27:33.111131 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0700cc3-9290-41f8-b785-ad17bf0917ed" containerName="nova-cell1-novncproxy-novncproxy" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.111149 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0700cc3-9290-41f8-b785-ad17bf0917ed" containerName="nova-cell1-novncproxy-novncproxy" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.111317 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0700cc3-9290-41f8-b785-ad17bf0917ed" containerName="nova-cell1-novncproxy-novncproxy" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.111875 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.116076 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.116485 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.116779 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.126640 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.256073 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.256223 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.256282 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.256303 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrxfg\" (UniqueName: \"kubernetes.io/projected/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-kube-api-access-xrxfg\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.256365 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.357951 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.358376 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.359085 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrxfg\" (UniqueName: \"kubernetes.io/projected/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-kube-api-access-xrxfg\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.359287 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.359476 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.377507 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.380248 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.397234 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.404256 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.411409 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrxfg\" (UniqueName: \"kubernetes.io/projected/22e1bf04-4a38-4fa3-85c3-b63e90226ffa-kube-api-access-xrxfg\") pod \"nova-cell1-novncproxy-0\" (UID: \"22e1bf04-4a38-4fa3-85c3-b63e90226ffa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.442104 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:33 crc kubenswrapper[4661]: I0120 18:27:33.999391 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 18:27:34 crc kubenswrapper[4661]: I0120 18:27:34.066516 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"22e1bf04-4a38-4fa3-85c3-b63e90226ffa","Type":"ContainerStarted","Data":"9d3a9e8796f9530346411dfe794491a6367c1b71fac945c3bcc331db3392b9cb"} Jan 20 18:27:34 crc kubenswrapper[4661]: I0120 18:27:34.152300 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0700cc3-9290-41f8-b785-ad17bf0917ed" path="/var/lib/kubelet/pods/f0700cc3-9290-41f8-b785-ad17bf0917ed/volumes" Jan 20 18:27:34 crc kubenswrapper[4661]: I0120 18:27:34.376155 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 20 18:27:34 crc kubenswrapper[4661]: I0120 18:27:34.376540 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 18:27:34 crc kubenswrapper[4661]: I0120 18:27:34.378969 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 20 18:27:34 crc kubenswrapper[4661]: I0120 18:27:34.380430 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.074945 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"22e1bf04-4a38-4fa3-85c3-b63e90226ffa","Type":"ContainerStarted","Data":"bac3d86fcd15548b8f3425582443c5e756d152ff2744d49e21ed0573858b6c84"} Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.075382 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.080819 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.108899 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.108880835 podStartE2EDuration="2.108880835s" podCreationTimestamp="2026-01-20 18:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:27:35.092226521 +0000 UTC m=+1311.423016183" watchObservedRunningTime="2026-01-20 18:27:35.108880835 +0000 UTC m=+1311.439670487" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.356640 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-x6dtq"] Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.358038 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.385090 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-x6dtq"] Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.507699 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.507829 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.507871 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxhgs\" (UniqueName: \"kubernetes.io/projected/70b388ac-7547-498d-9bc2-97c8248fe4bc-kube-api-access-bxhgs\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.507908 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-config\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.508083 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-dns-svc\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.610152 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.610458 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxhgs\" (UniqueName: \"kubernetes.io/projected/70b388ac-7547-498d-9bc2-97c8248fe4bc-kube-api-access-bxhgs\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.610495 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-config\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.610521 4661 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-dns-svc\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.610567 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.611154 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.611267 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-dns-svc\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.611336 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.611467 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-config\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.633341 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxhgs\" (UniqueName: \"kubernetes.io/projected/70b388ac-7547-498d-9bc2-97c8248fe4bc-kube-api-access-bxhgs\") pod \"dnsmasq-dns-5b856c5697-x6dtq\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:35 crc kubenswrapper[4661]: I0120 18:27:35.696147 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:36 crc kubenswrapper[4661]: W0120 18:27:36.029375 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70b388ac_7547_498d_9bc2_97c8248fe4bc.slice/crio-60874919977fee9fe744f98ca369154d99e74ef0a063bf52a2a83bf51fc1c8d4 WatchSource:0}: Error finding container 60874919977fee9fe744f98ca369154d99e74ef0a063bf52a2a83bf51fc1c8d4: Status 404 returned error can't find the container with id 60874919977fee9fe744f98ca369154d99e74ef0a063bf52a2a83bf51fc1c8d4 Jan 20 18:27:36 crc kubenswrapper[4661]: I0120 18:27:36.044423 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-x6dtq"] Jan 20 18:27:36 crc kubenswrapper[4661]: I0120 18:27:36.102145 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" event={"ID":"70b388ac-7547-498d-9bc2-97c8248fe4bc","Type":"ContainerStarted","Data":"60874919977fee9fe744f98ca369154d99e74ef0a063bf52a2a83bf51fc1c8d4"} Jan 20 18:27:37 crc kubenswrapper[4661]: I0120 18:27:37.111366 4661 generic.go:334] "Generic (PLEG): container finished" podID="70b388ac-7547-498d-9bc2-97c8248fe4bc" containerID="9a5f818cdc31cc9bb71f9c92850996f8ba24b018d4bbab00f7c696ee7056dc26" exitCode=0 Jan 20 18:27:37 crc kubenswrapper[4661]: I0120 18:27:37.111420 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" event={"ID":"70b388ac-7547-498d-9bc2-97c8248fe4bc","Type":"ContainerDied","Data":"9a5f818cdc31cc9bb71f9c92850996f8ba24b018d4bbab00f7c696ee7056dc26"} Jan 20 18:27:38 crc kubenswrapper[4661]: I0120 18:27:38.007684 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:27:38 crc kubenswrapper[4661]: I0120 18:27:38.120440 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b5862fe2-a3a1-4918-bf74-1c8df23a2584" containerName="nova-api-log" containerID="cri-o://11ab9a9b7ece251b799f535eaec6ff94347521b988eb1201b772db1fb4d1ef57" gracePeriod=30 Jan 20 18:27:38 crc kubenswrapper[4661]: I0120 18:27:38.121185 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" event={"ID":"70b388ac-7547-498d-9bc2-97c8248fe4bc","Type":"ContainerStarted","Data":"ceed4bd7cda1f544ac2f5a8e3be9e2511e166bc08c31d936f1955c3ed45f5383"} Jan 20 18:27:38 crc kubenswrapper[4661]: I0120 18:27:38.121217 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:38 crc kubenswrapper[4661]: I0120 18:27:38.121489 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b5862fe2-a3a1-4918-bf74-1c8df23a2584" containerName="nova-api-api" containerID="cri-o://3675801446bfa6cf88d7b4cf903d877df797578ba07e86f6d5bcfb1d67b142d4" gracePeriod=30 Jan 20 18:27:38 crc kubenswrapper[4661]: I0120 18:27:38.142741 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" podStartSLOduration=3.142721742 podStartE2EDuration="3.142721742s" podCreationTimestamp="2026-01-20 18:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:27:38.136771757 +0000 UTC m=+1314.467561419" watchObservedRunningTime="2026-01-20 18:27:38.142721742 +0000 UTC m=+1314.473511404" Jan 
20 18:27:38 crc kubenswrapper[4661]: I0120 18:27:38.357993 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:27:38 crc kubenswrapper[4661]: I0120 18:27:38.358330 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="sg-core" containerID="cri-o://e128093979c7147f4b1158a5d112ac5fca72fffa6d4813133733b7d4c8b7e5bf" gracePeriod=30 Jan 20 18:27:38 crc kubenswrapper[4661]: I0120 18:27:38.358355 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="ceilometer-notification-agent" containerID="cri-o://825b326842fc66dfde87ec11b4e620b73492f567e913b75df433a1523999470a" gracePeriod=30 Jan 20 18:27:38 crc kubenswrapper[4661]: I0120 18:27:38.358573 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="ceilometer-central-agent" containerID="cri-o://7fdfe51495ff10fd8052a516b680fd64efc0deb7504015f56e21edf0431bab4f" gracePeriod=30 Jan 20 18:27:38 crc kubenswrapper[4661]: I0120 18:27:38.358474 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="proxy-httpd" containerID="cri-o://8059494993e39192f483ebc8c2da67d8f111007964dd7398a66b980da12b1347" gracePeriod=30 Jan 20 18:27:38 crc kubenswrapper[4661]: I0120 18:27:38.442850 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:39 crc kubenswrapper[4661]: I0120 18:27:39.131142 4661 generic.go:334] "Generic (PLEG): container finished" podID="b5862fe2-a3a1-4918-bf74-1c8df23a2584" containerID="11ab9a9b7ece251b799f535eaec6ff94347521b988eb1201b772db1fb4d1ef57" exitCode=143 Jan 20 18:27:39 crc kubenswrapper[4661]: I0120 18:27:39.131394 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5862fe2-a3a1-4918-bf74-1c8df23a2584","Type":"ContainerDied","Data":"11ab9a9b7ece251b799f535eaec6ff94347521b988eb1201b772db1fb4d1ef57"} Jan 20 18:27:39 crc kubenswrapper[4661]: I0120 18:27:39.133470 4661 generic.go:334] "Generic (PLEG): container finished" podID="4b7a79c6-309f-494e-87c6-429326682d11" containerID="8059494993e39192f483ebc8c2da67d8f111007964dd7398a66b980da12b1347" exitCode=0 Jan 20 18:27:39 crc kubenswrapper[4661]: I0120 18:27:39.133487 4661 generic.go:334] "Generic (PLEG): container finished" podID="4b7a79c6-309f-494e-87c6-429326682d11" containerID="e128093979c7147f4b1158a5d112ac5fca72fffa6d4813133733b7d4c8b7e5bf" exitCode=2 Jan 20 18:27:39 crc kubenswrapper[4661]: I0120 18:27:39.133494 4661 generic.go:334] "Generic (PLEG): container finished" podID="4b7a79c6-309f-494e-87c6-429326682d11" containerID="7fdfe51495ff10fd8052a516b680fd64efc0deb7504015f56e21edf0431bab4f" exitCode=0 Jan 20 18:27:39 crc kubenswrapper[4661]: I0120 18:27:39.134266 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b7a79c6-309f-494e-87c6-429326682d11","Type":"ContainerDied","Data":"8059494993e39192f483ebc8c2da67d8f111007964dd7398a66b980da12b1347"} Jan 20 18:27:39 crc kubenswrapper[4661]: I0120 18:27:39.134290 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4b7a79c6-309f-494e-87c6-429326682d11","Type":"ContainerDied","Data":"e128093979c7147f4b1158a5d112ac5fca72fffa6d4813133733b7d4c8b7e5bf"} Jan 20 18:27:39 crc kubenswrapper[4661]: I0120 18:27:39.134301 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b7a79c6-309f-494e-87c6-429326682d11","Type":"ContainerDied","Data":"7fdfe51495ff10fd8052a516b680fd64efc0deb7504015f56e21edf0431bab4f"} Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.036341 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.167:3000/\": dial tcp 10.217.0.167:3000: connect: connection refused" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.158935 4661 generic.go:334] "Generic (PLEG): container finished" podID="4b7a79c6-309f-494e-87c6-429326682d11" containerID="825b326842fc66dfde87ec11b4e620b73492f567e913b75df433a1523999470a" exitCode=0 Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.159030 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b7a79c6-309f-494e-87c6-429326682d11","Type":"ContainerDied","Data":"825b326842fc66dfde87ec11b4e620b73492f567e913b75df433a1523999470a"} Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.400995 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.518514 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-sg-core-conf-yaml\") pod \"4b7a79c6-309f-494e-87c6-429326682d11\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.518573 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-combined-ca-bundle\") pod \"4b7a79c6-309f-494e-87c6-429326682d11\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.518615 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-config-data\") pod \"4b7a79c6-309f-494e-87c6-429326682d11\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.518667 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b7a79c6-309f-494e-87c6-429326682d11-run-httpd\") pod \"4b7a79c6-309f-494e-87c6-429326682d11\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.518718 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxr7d\" (UniqueName: \"kubernetes.io/projected/4b7a79c6-309f-494e-87c6-429326682d11-kube-api-access-gxr7d\") pod \"4b7a79c6-309f-494e-87c6-429326682d11\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.518782 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b7a79c6-309f-494e-87c6-429326682d11-log-httpd\") pod 
\"4b7a79c6-309f-494e-87c6-429326682d11\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.518834 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-ceilometer-tls-certs\") pod \"4b7a79c6-309f-494e-87c6-429326682d11\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.519009 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-scripts\") pod \"4b7a79c6-309f-494e-87c6-429326682d11\" (UID: \"4b7a79c6-309f-494e-87c6-429326682d11\") " Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.520173 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b7a79c6-309f-494e-87c6-429326682d11-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4b7a79c6-309f-494e-87c6-429326682d11" (UID: "4b7a79c6-309f-494e-87c6-429326682d11"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.520544 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b7a79c6-309f-494e-87c6-429326682d11-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4b7a79c6-309f-494e-87c6-429326682d11" (UID: "4b7a79c6-309f-494e-87c6-429326682d11"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.546896 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b7a79c6-309f-494e-87c6-429326682d11-kube-api-access-gxr7d" (OuterVolumeSpecName: "kube-api-access-gxr7d") pod "4b7a79c6-309f-494e-87c6-429326682d11" (UID: "4b7a79c6-309f-494e-87c6-429326682d11"). InnerVolumeSpecName "kube-api-access-gxr7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.549069 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-scripts" (OuterVolumeSpecName: "scripts") pod "4b7a79c6-309f-494e-87c6-429326682d11" (UID: "4b7a79c6-309f-494e-87c6-429326682d11"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.589735 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4b7a79c6-309f-494e-87c6-429326682d11" (UID: "4b7a79c6-309f-494e-87c6-429326682d11"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.615845 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4b7a79c6-309f-494e-87c6-429326682d11" (UID: "4b7a79c6-309f-494e-87c6-429326682d11"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.620723 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxr7d\" (UniqueName: \"kubernetes.io/projected/4b7a79c6-309f-494e-87c6-429326682d11-kube-api-access-gxr7d\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.620751 4661 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b7a79c6-309f-494e-87c6-429326682d11-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.620761 4661 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.620770 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.620779 4661 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.620787 4661 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b7a79c6-309f-494e-87c6-429326682d11-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.623246 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b7a79c6-309f-494e-87c6-429326682d11" (UID: "4b7a79c6-309f-494e-87c6-429326682d11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.672022 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-config-data" (OuterVolumeSpecName: "config-data") pod "4b7a79c6-309f-494e-87c6-429326682d11" (UID: "4b7a79c6-309f-494e-87c6-429326682d11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.692214 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.721904 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.722156 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b7a79c6-309f-494e-87c6-429326682d11-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.823212 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5862fe2-a3a1-4918-bf74-1c8df23a2584-combined-ca-bundle\") pod \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.823332 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5862fe2-a3a1-4918-bf74-1c8df23a2584-logs\") pod \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.823381 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5862fe2-a3a1-4918-bf74-1c8df23a2584-config-data\") pod \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.823505 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5gzr\" (UniqueName: \"kubernetes.io/projected/b5862fe2-a3a1-4918-bf74-1c8df23a2584-kube-api-access-z5gzr\") pod \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\" (UID: \"b5862fe2-a3a1-4918-bf74-1c8df23a2584\") " Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.824551 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5862fe2-a3a1-4918-bf74-1c8df23a2584-logs" (OuterVolumeSpecName: "logs") pod "b5862fe2-a3a1-4918-bf74-1c8df23a2584" (UID: "b5862fe2-a3a1-4918-bf74-1c8df23a2584"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.827059 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5862fe2-a3a1-4918-bf74-1c8df23a2584-kube-api-access-z5gzr" (OuterVolumeSpecName: "kube-api-access-z5gzr") pod "b5862fe2-a3a1-4918-bf74-1c8df23a2584" (UID: "b5862fe2-a3a1-4918-bf74-1c8df23a2584"). InnerVolumeSpecName "kube-api-access-z5gzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.851747 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5862fe2-a3a1-4918-bf74-1c8df23a2584-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5862fe2-a3a1-4918-bf74-1c8df23a2584" (UID: "b5862fe2-a3a1-4918-bf74-1c8df23a2584"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.863601 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5862fe2-a3a1-4918-bf74-1c8df23a2584-config-data" (OuterVolumeSpecName: "config-data") pod "b5862fe2-a3a1-4918-bf74-1c8df23a2584" (UID: "b5862fe2-a3a1-4918-bf74-1c8df23a2584"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.925340 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5862fe2-a3a1-4918-bf74-1c8df23a2584-logs\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.925370 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5862fe2-a3a1-4918-bf74-1c8df23a2584-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.925380 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5gzr\" (UniqueName: \"kubernetes.io/projected/b5862fe2-a3a1-4918-bf74-1c8df23a2584-kube-api-access-z5gzr\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:41 crc kubenswrapper[4661]: I0120 18:27:41.925392 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5862fe2-a3a1-4918-bf74-1c8df23a2584-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.166310 4661 generic.go:334] "Generic (PLEG): container finished" podID="b5862fe2-a3a1-4918-bf74-1c8df23a2584" containerID="3675801446bfa6cf88d7b4cf903d877df797578ba07e86f6d5bcfb1d67b142d4" exitCode=0 Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.166394 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.168342 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5862fe2-a3a1-4918-bf74-1c8df23a2584","Type":"ContainerDied","Data":"3675801446bfa6cf88d7b4cf903d877df797578ba07e86f6d5bcfb1d67b142d4"} Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.168383 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5862fe2-a3a1-4918-bf74-1c8df23a2584","Type":"ContainerDied","Data":"e4e7aeb9e17fbbd565ca3e7a89214730bf634823920fa2d944f96361953bedce"} Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.168401 4661 scope.go:117] "RemoveContainer" containerID="3675801446bfa6cf88d7b4cf903d877df797578ba07e86f6d5bcfb1d67b142d4" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.172372 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b7a79c6-309f-494e-87c6-429326682d11","Type":"ContainerDied","Data":"bad56f2668fa3dfe5ff84fff1c654c919f342f78286c9c0ccf655e3444e58f61"} Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.172420 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.202017 4661 scope.go:117] "RemoveContainer" containerID="11ab9a9b7ece251b799f535eaec6ff94347521b988eb1201b772db1fb4d1ef57" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.212715 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.227438 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.240049 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.243140 4661 scope.go:117] "RemoveContainer" containerID="3675801446bfa6cf88d7b4cf903d877df797578ba07e86f6d5bcfb1d67b142d4" Jan 20 18:27:42 crc kubenswrapper[4661]: E0120 18:27:42.246073 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3675801446bfa6cf88d7b4cf903d877df797578ba07e86f6d5bcfb1d67b142d4\": container with ID starting with 3675801446bfa6cf88d7b4cf903d877df797578ba07e86f6d5bcfb1d67b142d4 not found: ID does not exist" containerID="3675801446bfa6cf88d7b4cf903d877df797578ba07e86f6d5bcfb1d67b142d4" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.246110 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3675801446bfa6cf88d7b4cf903d877df797578ba07e86f6d5bcfb1d67b142d4"} err="failed to get container status \"3675801446bfa6cf88d7b4cf903d877df797578ba07e86f6d5bcfb1d67b142d4\": rpc error: code = NotFound desc = could not find container \"3675801446bfa6cf88d7b4cf903d877df797578ba07e86f6d5bcfb1d67b142d4\": container with ID starting with 3675801446bfa6cf88d7b4cf903d877df797578ba07e86f6d5bcfb1d67b142d4 not found: ID does not exist" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.246134 4661 scope.go:117] "RemoveContainer" containerID="11ab9a9b7ece251b799f535eaec6ff94347521b988eb1201b772db1fb4d1ef57" Jan 20 18:27:42 crc kubenswrapper[4661]: E0120 18:27:42.247533 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11ab9a9b7ece251b799f535eaec6ff94347521b988eb1201b772db1fb4d1ef57\": container with ID starting with 11ab9a9b7ece251b799f535eaec6ff94347521b988eb1201b772db1fb4d1ef57 not found: ID does not exist" containerID="11ab9a9b7ece251b799f535eaec6ff94347521b988eb1201b772db1fb4d1ef57" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.247556 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ab9a9b7ece251b799f535eaec6ff94347521b988eb1201b772db1fb4d1ef57"} err="failed to get container status \"11ab9a9b7ece251b799f535eaec6ff94347521b988eb1201b772db1fb4d1ef57\": rpc error: code = NotFound desc = could not find container \"11ab9a9b7ece251b799f535eaec6ff94347521b988eb1201b772db1fb4d1ef57\": container with ID starting with 11ab9a9b7ece251b799f535eaec6ff94347521b988eb1201b772db1fb4d1ef57 not found: ID does not exist" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.247578 4661 scope.go:117] "RemoveContainer" containerID="8059494993e39192f483ebc8c2da67d8f111007964dd7398a66b980da12b1347" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.250743 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.264789 4661 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:27:42 crc kubenswrapper[4661]: E0120 18:27:42.265192 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="proxy-httpd" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.265212 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="proxy-httpd" Jan 20 18:27:42 crc kubenswrapper[4661]: E0120 18:27:42.265231 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="ceilometer-central-agent" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.265239 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="ceilometer-central-agent" Jan 20 18:27:42 crc kubenswrapper[4661]: E0120 18:27:42.265250 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="ceilometer-notification-agent" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.265256 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="ceilometer-notification-agent" Jan 20 18:27:42 crc kubenswrapper[4661]: E0120 18:27:42.265270 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5862fe2-a3a1-4918-bf74-1c8df23a2584" containerName="nova-api-api" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.265276 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5862fe2-a3a1-4918-bf74-1c8df23a2584" containerName="nova-api-api" Jan 20 18:27:42 crc kubenswrapper[4661]: E0120 18:27:42.265293 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="sg-core" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.265301 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="sg-core" Jan 20 18:27:42 crc kubenswrapper[4661]: E0120 18:27:42.265312 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5862fe2-a3a1-4918-bf74-1c8df23a2584" containerName="nova-api-log" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.265320 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5862fe2-a3a1-4918-bf74-1c8df23a2584" containerName="nova-api-log" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.265512 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5862fe2-a3a1-4918-bf74-1c8df23a2584" containerName="nova-api-log" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.265526 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="ceilometer-notification-agent" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.265534 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="sg-core" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.265547 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5862fe2-a3a1-4918-bf74-1c8df23a2584" containerName="nova-api-api" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.265568 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="proxy-httpd" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.265582 4661 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4b7a79c6-309f-494e-87c6-429326682d11" containerName="ceilometer-central-agent" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.267255 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.270759 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.271108 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.271256 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.284622 4661 scope.go:117] "RemoveContainer" containerID="e128093979c7147f4b1158a5d112ac5fca72fffa6d4813133733b7d4c8b7e5bf" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.288160 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.289770 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.293098 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.293429 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.293559 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.318313 4661 scope.go:117] "RemoveContainer" containerID="825b326842fc66dfde87ec11b4e620b73492f567e913b75df433a1523999470a" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.318831 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.330892 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-scripts\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.330940 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f462-d3c3-445d-b451-6e01d59e6da8-logs\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.330959 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.330976 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm5wr\" (UniqueName: \"kubernetes.io/projected/ff41f462-d3c3-445d-b451-6e01d59e6da8-kube-api-access-jm5wr\") pod \"nova-api-0\" (UID: 
\"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.331003 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.331030 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-config-data\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.331068 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.331089 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf95r\" (UniqueName: \"kubernetes.io/projected/cf4af574-77f9-45e3-8791-1a9a5ca67b38-kube-api-access-zf95r\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.331125 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.331143 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.331158 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf4af574-77f9-45e3-8791-1a9a5ca67b38-log-httpd\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.331184 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-config-data\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.331211 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.331229 4661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf4af574-77f9-45e3-8791-1a9a5ca67b38-run-httpd\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.343032 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.357802 4661 scope.go:117] "RemoveContainer" containerID="7fdfe51495ff10fd8052a516b680fd64efc0deb7504015f56e21edf0431bab4f" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.432998 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.433217 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf95r\" (UniqueName: \"kubernetes.io/projected/cf4af574-77f9-45e3-8791-1a9a5ca67b38-kube-api-access-zf95r\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.433357 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.433472 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.434012 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf4af574-77f9-45e3-8791-1a9a5ca67b38-log-httpd\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.434118 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-config-data\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.434206 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf4af574-77f9-45e3-8791-1a9a5ca67b38-run-httpd\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.434274 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.434360 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-scripts\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.434438 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f462-d3c3-445d-b451-6e01d59e6da8-logs\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.434539 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.434609 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm5wr\" (UniqueName: \"kubernetes.io/projected/ff41f462-d3c3-445d-b451-6e01d59e6da8-kube-api-access-jm5wr\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.434711 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.434797 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-config-data\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.434886 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf4af574-77f9-45e3-8791-1a9a5ca67b38-log-httpd\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.437950 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.438934 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f462-d3c3-445d-b451-6e01d59e6da8-logs\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.438408 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-config-data\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.439178 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/cf4af574-77f9-45e3-8791-1a9a5ca67b38-run-httpd\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.439406 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-scripts\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.440319 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-config-data\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.443882 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.447801 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.448353 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.450697 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.451084 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf95r\" (UniqueName: \"kubernetes.io/projected/cf4af574-77f9-45e3-8791-1a9a5ca67b38-kube-api-access-zf95r\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.451293 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.454974 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm5wr\" (UniqueName: \"kubernetes.io/projected/ff41f462-d3c3-445d-b451-6e01d59e6da8-kube-api-access-jm5wr\") pod \"nova-api-0\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " pod="openstack/nova-api-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.583886 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 18:27:42 crc kubenswrapper[4661]: I0120 18:27:42.615048 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:27:43 crc kubenswrapper[4661]: I0120 18:27:43.064287 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 18:27:43 crc kubenswrapper[4661]: I0120 18:27:43.185471 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf4af574-77f9-45e3-8791-1a9a5ca67b38","Type":"ContainerStarted","Data":"4a5532fca5dc717a4ca47aa716bf083c5edb060f6e8f7c4a15df81205046b36c"} Jan 20 18:27:43 crc kubenswrapper[4661]: W0120 18:27:43.219341 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff41f462_d3c3_445d_b451_6e01d59e6da8.slice/crio-c6cc9793cf0deb53ab4a89430d8482f3f6ea345fde99c4a6dd69d23e02678c2f WatchSource:0}: Error finding container c6cc9793cf0deb53ab4a89430d8482f3f6ea345fde99c4a6dd69d23e02678c2f: Status 404 returned error can't find the container with id c6cc9793cf0deb53ab4a89430d8482f3f6ea345fde99c4a6dd69d23e02678c2f Jan 20 18:27:43 crc kubenswrapper[4661]: I0120 18:27:43.228609 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:27:43 crc kubenswrapper[4661]: I0120 18:27:43.442950 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:43 crc kubenswrapper[4661]: I0120 18:27:43.477786 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.155762 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b7a79c6-309f-494e-87c6-429326682d11" path="/var/lib/kubelet/pods/4b7a79c6-309f-494e-87c6-429326682d11/volumes" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.156996 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5862fe2-a3a1-4918-bf74-1c8df23a2584" path="/var/lib/kubelet/pods/b5862fe2-a3a1-4918-bf74-1c8df23a2584/volumes" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.197044 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff41f462-d3c3-445d-b451-6e01d59e6da8","Type":"ContainerStarted","Data":"f363df48149eaf5b114f51dfc4a4c0ac4129bef329a284890018fedc2458d48e"} Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.198006 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff41f462-d3c3-445d-b451-6e01d59e6da8","Type":"ContainerStarted","Data":"6ee3d620884cfcf09aae32e9feb55b7bd640425d7ab9d5b4333f5f9509ed04c8"} Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.198075 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff41f462-d3c3-445d-b451-6e01d59e6da8","Type":"ContainerStarted","Data":"c6cc9793cf0deb53ab4a89430d8482f3f6ea345fde99c4a6dd69d23e02678c2f"} Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.198908 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf4af574-77f9-45e3-8791-1a9a5ca67b38","Type":"ContainerStarted","Data":"b95873b03817a483b37b6ed1e7535453263afc186b179fc7fc1023993731b6fd"} Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.219657 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-cell1-novncproxy-0" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.230840 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.2308196750000002 podStartE2EDuration="2.230819675s" podCreationTimestamp="2026-01-20 18:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:27:44.213448452 +0000 UTC m=+1320.544238114" watchObservedRunningTime="2026-01-20 18:27:44.230819675 +0000 UTC m=+1320.561609337" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.379650 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jqfj9"] Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.380764 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.385273 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.385459 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.389330 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jqfj9"] Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.483850 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-scripts\") pod \"nova-cell1-cell-mapping-jqfj9\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.483961 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jqfj9\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.484034 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpxlq\" (UniqueName: \"kubernetes.io/projected/7cb4b490-1124-4361-b5fb-ca6db9245b74-kube-api-access-vpxlq\") pod \"nova-cell1-cell-mapping-jqfj9\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.484054 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-config-data\") pod \"nova-cell1-cell-mapping-jqfj9\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.585960 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpxlq\" (UniqueName: \"kubernetes.io/projected/7cb4b490-1124-4361-b5fb-ca6db9245b74-kube-api-access-vpxlq\") pod \"nova-cell1-cell-mapping-jqfj9\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 
18:27:44.586005 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-config-data\") pod \"nova-cell1-cell-mapping-jqfj9\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.586075 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-scripts\") pod \"nova-cell1-cell-mapping-jqfj9\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.586127 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jqfj9\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.591298 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-config-data\") pod \"nova-cell1-cell-mapping-jqfj9\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.592085 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-scripts\") pod \"nova-cell1-cell-mapping-jqfj9\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.592123 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jqfj9\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.606266 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpxlq\" (UniqueName: \"kubernetes.io/projected/7cb4b490-1124-4361-b5fb-ca6db9245b74-kube-api-access-vpxlq\") pod \"nova-cell1-cell-mapping-jqfj9\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:44 crc kubenswrapper[4661]: I0120 18:27:44.882717 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:45 crc kubenswrapper[4661]: I0120 18:27:45.223075 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf4af574-77f9-45e3-8791-1a9a5ca67b38","Type":"ContainerStarted","Data":"3fab0463fdd09147fb51758b6c3d597b4e32e27d4493ec06b416f7571e4aaa15"} Jan 20 18:27:45 crc kubenswrapper[4661]: I0120 18:27:45.322769 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jqfj9"] Jan 20 18:27:45 crc kubenswrapper[4661]: I0120 18:27:45.697856 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:27:45 crc kubenswrapper[4661]: I0120 18:27:45.775515 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-222f9"] Jan 20 18:27:45 crc kubenswrapper[4661]: I0120 18:27:45.775799 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-222f9" podUID="1a7bdc38-707c-4869-8724-6e6319f3ccf6" containerName="dnsmasq-dns" containerID="cri-o://abcf39dc50198c72691c46c759fdbd2613e5ec363520acc4e246fff40428c656" gracePeriod=10 Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.262779 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf4af574-77f9-45e3-8791-1a9a5ca67b38","Type":"ContainerStarted","Data":"f06e4917bda1c71acade3fe59d15bbc3b29245bded338777e34e0f592a9bf5a0"} Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.268993 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jqfj9" event={"ID":"7cb4b490-1124-4361-b5fb-ca6db9245b74","Type":"ContainerStarted","Data":"e0105e71385efa784d6b559e5f64ab0b499e1fce55bafb85eb78a914adb65420"} Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.269045 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jqfj9" event={"ID":"7cb4b490-1124-4361-b5fb-ca6db9245b74","Type":"ContainerStarted","Data":"ea8e9dd92701d88d3fbde23b5129afbe87d6c5e6dc702d06f4ffe5e9f199a9b7"} Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.285436 4661 generic.go:334] "Generic (PLEG): container finished" podID="1a7bdc38-707c-4869-8724-6e6319f3ccf6" containerID="abcf39dc50198c72691c46c759fdbd2613e5ec363520acc4e246fff40428c656" exitCode=0 Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.285474 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-222f9" event={"ID":"1a7bdc38-707c-4869-8724-6e6319f3ccf6","Type":"ContainerDied","Data":"abcf39dc50198c72691c46c759fdbd2613e5ec363520acc4e246fff40428c656"} Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.323510 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.351024 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jqfj9" podStartSLOduration=2.351007994 podStartE2EDuration="2.351007994s" podCreationTimestamp="2026-01-20 18:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:27:46.302120089 +0000 UTC m=+1322.632909761" watchObservedRunningTime="2026-01-20 18:27:46.351007994 +0000 UTC m=+1322.681797656" Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.417019 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdpv9\" (UniqueName: \"kubernetes.io/projected/1a7bdc38-707c-4869-8724-6e6319f3ccf6-kube-api-access-gdpv9\") pod \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.417077 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-dns-svc\") pod \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.417110 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-ovsdbserver-sb\") pod \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.417153 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-config\") pod \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.417230 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-ovsdbserver-nb\") pod \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\" (UID: \"1a7bdc38-707c-4869-8724-6e6319f3ccf6\") " Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.428500 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7bdc38-707c-4869-8724-6e6319f3ccf6-kube-api-access-gdpv9" (OuterVolumeSpecName: "kube-api-access-gdpv9") pod "1a7bdc38-707c-4869-8724-6e6319f3ccf6" (UID: "1a7bdc38-707c-4869-8724-6e6319f3ccf6"). InnerVolumeSpecName "kube-api-access-gdpv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.477268 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1a7bdc38-707c-4869-8724-6e6319f3ccf6" (UID: "1a7bdc38-707c-4869-8724-6e6319f3ccf6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.503072 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1a7bdc38-707c-4869-8724-6e6319f3ccf6" (UID: "1a7bdc38-707c-4869-8724-6e6319f3ccf6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.505051 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-config" (OuterVolumeSpecName: "config") pod "1a7bdc38-707c-4869-8724-6e6319f3ccf6" (UID: "1a7bdc38-707c-4869-8724-6e6319f3ccf6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.510959 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1a7bdc38-707c-4869-8724-6e6319f3ccf6" (UID: "1a7bdc38-707c-4869-8724-6e6319f3ccf6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.520020 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdpv9\" (UniqueName: \"kubernetes.io/projected/1a7bdc38-707c-4869-8724-6e6319f3ccf6-kube-api-access-gdpv9\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.520083 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.520094 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.520104 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:46 crc kubenswrapper[4661]: I0120 18:27:46.520113 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a7bdc38-707c-4869-8724-6e6319f3ccf6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:47 crc kubenswrapper[4661]: I0120 18:27:47.311321 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf4af574-77f9-45e3-8791-1a9a5ca67b38","Type":"ContainerStarted","Data":"1820bd00bb83f6e1cf122d3212d4f4b1f8a9c754950efa1716a5267c6ae93be6"} Jan 20 18:27:47 crc kubenswrapper[4661]: I0120 18:27:47.311735 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 18:27:47 crc kubenswrapper[4661]: I0120 18:27:47.318012 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-222f9" Jan 20 18:27:47 crc kubenswrapper[4661]: I0120 18:27:47.318014 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-222f9" event={"ID":"1a7bdc38-707c-4869-8724-6e6319f3ccf6","Type":"ContainerDied","Data":"596eed7ca8913285e1a5b09db8048c7d8b6aabe62385555533164f3e3eb9b686"} Jan 20 18:27:47 crc kubenswrapper[4661]: I0120 18:27:47.318088 4661 scope.go:117] "RemoveContainer" containerID="abcf39dc50198c72691c46c759fdbd2613e5ec363520acc4e246fff40428c656" Jan 20 18:27:47 crc kubenswrapper[4661]: I0120 18:27:47.345557 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.669633947 podStartE2EDuration="5.345533729s" podCreationTimestamp="2026-01-20 18:27:42 +0000 UTC" firstStartedPulling="2026-01-20 18:27:43.060893568 +0000 UTC m=+1319.391683230" lastFinishedPulling="2026-01-20 18:27:46.73679335 +0000 UTC m=+1323.067583012" observedRunningTime="2026-01-20 18:27:47.333263019 +0000 UTC m=+1323.664052681" watchObservedRunningTime="2026-01-20 18:27:47.345533729 +0000 UTC m=+1323.676323391" Jan 20 18:27:47 crc kubenswrapper[4661]: I0120 18:27:47.347912 4661 scope.go:117] "RemoveContainer" containerID="ac95899342d34113b0d790041f47b4b0e1a8ddc101ff0079d6f661d2f6afd04e" Jan 20 18:27:47 crc kubenswrapper[4661]: I0120 18:27:47.404611 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-222f9"] Jan 20 18:27:47 crc kubenswrapper[4661]: I0120 18:27:47.425498 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-222f9"] Jan 20 18:27:48 crc kubenswrapper[4661]: I0120 18:27:48.161121 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a7bdc38-707c-4869-8724-6e6319f3ccf6" path="/var/lib/kubelet/pods/1a7bdc38-707c-4869-8724-6e6319f3ccf6/volumes" Jan 20 18:27:51 crc kubenswrapper[4661]: I0120 18:27:51.259411 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-566b5b7845-222f9" podUID="1a7bdc38-707c-4869-8724-6e6319f3ccf6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.174:5353: i/o timeout" Jan 20 18:27:51 crc kubenswrapper[4661]: I0120 18:27:51.356019 4661 generic.go:334] "Generic (PLEG): container finished" podID="7cb4b490-1124-4361-b5fb-ca6db9245b74" containerID="e0105e71385efa784d6b559e5f64ab0b499e1fce55bafb85eb78a914adb65420" exitCode=0 Jan 20 18:27:51 crc kubenswrapper[4661]: I0120 18:27:51.356068 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jqfj9" event={"ID":"7cb4b490-1124-4361-b5fb-ca6db9245b74","Type":"ContainerDied","Data":"e0105e71385efa784d6b559e5f64ab0b499e1fce55bafb85eb78a914adb65420"} Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.616284 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.617588 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.746036 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.822764 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpxlq\" (UniqueName: \"kubernetes.io/projected/7cb4b490-1124-4361-b5fb-ca6db9245b74-kube-api-access-vpxlq\") pod \"7cb4b490-1124-4361-b5fb-ca6db9245b74\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.822818 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-scripts\") pod \"7cb4b490-1124-4361-b5fb-ca6db9245b74\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.822910 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-config-data\") pod \"7cb4b490-1124-4361-b5fb-ca6db9245b74\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.823016 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-combined-ca-bundle\") pod \"7cb4b490-1124-4361-b5fb-ca6db9245b74\" (UID: \"7cb4b490-1124-4361-b5fb-ca6db9245b74\") " Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.844087 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-scripts" (OuterVolumeSpecName: "scripts") pod "7cb4b490-1124-4361-b5fb-ca6db9245b74" (UID: "7cb4b490-1124-4361-b5fb-ca6db9245b74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.855907 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb4b490-1124-4361-b5fb-ca6db9245b74-kube-api-access-vpxlq" (OuterVolumeSpecName: "kube-api-access-vpxlq") pod "7cb4b490-1124-4361-b5fb-ca6db9245b74" (UID: "7cb4b490-1124-4361-b5fb-ca6db9245b74"). InnerVolumeSpecName "kube-api-access-vpxlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.877622 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-config-data" (OuterVolumeSpecName: "config-data") pod "7cb4b490-1124-4361-b5fb-ca6db9245b74" (UID: "7cb4b490-1124-4361-b5fb-ca6db9245b74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.898760 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cb4b490-1124-4361-b5fb-ca6db9245b74" (UID: "7cb4b490-1124-4361-b5fb-ca6db9245b74"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.924965 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.925019 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpxlq\" (UniqueName: \"kubernetes.io/projected/7cb4b490-1124-4361-b5fb-ca6db9245b74-kube-api-access-vpxlq\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.925033 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:52 crc kubenswrapper[4661]: I0120 18:27:52.925064 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb4b490-1124-4361-b5fb-ca6db9245b74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:53 crc kubenswrapper[4661]: I0120 18:27:53.372630 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jqfj9" event={"ID":"7cb4b490-1124-4361-b5fb-ca6db9245b74","Type":"ContainerDied","Data":"ea8e9dd92701d88d3fbde23b5129afbe87d6c5e6dc702d06f4ffe5e9f199a9b7"} Jan 20 18:27:53 crc kubenswrapper[4661]: I0120 18:27:53.372713 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea8e9dd92701d88d3fbde23b5129afbe87d6c5e6dc702d06f4ffe5e9f199a9b7" Jan 20 18:27:53 crc kubenswrapper[4661]: I0120 18:27:53.372768 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jqfj9" Jan 20 18:27:53 crc kubenswrapper[4661]: I0120 18:27:53.563335 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:27:53 crc kubenswrapper[4661]: I0120 18:27:53.571909 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:27:53 crc kubenswrapper[4661]: I0120 18:27:53.572170 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3ac36711-8044-4bf7-a7a3-12f6805f0538" containerName="nova-scheduler-scheduler" containerID="cri-o://aa8e7288364cad916dff48d1f1dbcfccccb4192b05929a79050678fdbc4ecc24" gracePeriod=30 Jan 20 18:27:53 crc kubenswrapper[4661]: I0120 18:27:53.618564 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:53 crc kubenswrapper[4661]: I0120 18:27:53.618802 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerName="nova-metadata-log" containerID="cri-o://49b02e4ed668519864414d43c44c2b4ef92f630699e9f34bb24814112bd47ce5" gracePeriod=30 Jan 20 18:27:53 crc kubenswrapper[4661]: I0120 18:27:53.618914 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerName="nova-metadata-metadata" containerID="cri-o://49efe543fd7bba20e2fb74d878c116c5c2b7c2095c0136a8ea6e06ae5ceb501d" gracePeriod=30 Jan 20 18:27:53 crc kubenswrapper[4661]: I0120 18:27:53.631885 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff41f462-d3c3-445d-b451-6e01d59e6da8" 
containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 18:27:53 crc kubenswrapper[4661]: I0120 18:27:53.631887 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff41f462-d3c3-445d-b451-6e01d59e6da8" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.184:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 18:27:54 crc kubenswrapper[4661]: I0120 18:27:54.382371 4661 generic.go:334] "Generic (PLEG): container finished" podID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerID="49b02e4ed668519864414d43c44c2b4ef92f630699e9f34bb24814112bd47ce5" exitCode=143 Jan 20 18:27:54 crc kubenswrapper[4661]: I0120 18:27:54.382451 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b10d37a8-0883-4e5a-9594-b6c216b03c38","Type":"ContainerDied","Data":"49b02e4ed668519864414d43c44c2b4ef92f630699e9f34bb24814112bd47ce5"} Jan 20 18:27:54 crc kubenswrapper[4661]: I0120 18:27:54.382889 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff41f462-d3c3-445d-b451-6e01d59e6da8" containerName="nova-api-log" containerID="cri-o://6ee3d620884cfcf09aae32e9feb55b7bd640425d7ab9d5b4333f5f9509ed04c8" gracePeriod=30 Jan 20 18:27:54 crc kubenswrapper[4661]: I0120 18:27:54.382946 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff41f462-d3c3-445d-b451-6e01d59e6da8" containerName="nova-api-api" containerID="cri-o://f363df48149eaf5b114f51dfc4a4c0ac4129bef329a284890018fedc2458d48e" gracePeriod=30 Jan 20 18:27:54 crc kubenswrapper[4661]: I0120 18:27:54.974821 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.070687 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f2vx\" (UniqueName: \"kubernetes.io/projected/3ac36711-8044-4bf7-a7a3-12f6805f0538-kube-api-access-9f2vx\") pod \"3ac36711-8044-4bf7-a7a3-12f6805f0538\" (UID: \"3ac36711-8044-4bf7-a7a3-12f6805f0538\") " Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.071000 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ac36711-8044-4bf7-a7a3-12f6805f0538-combined-ca-bundle\") pod \"3ac36711-8044-4bf7-a7a3-12f6805f0538\" (UID: \"3ac36711-8044-4bf7-a7a3-12f6805f0538\") " Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.071828 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ac36711-8044-4bf7-a7a3-12f6805f0538-config-data\") pod \"3ac36711-8044-4bf7-a7a3-12f6805f0538\" (UID: \"3ac36711-8044-4bf7-a7a3-12f6805f0538\") " Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.082306 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac36711-8044-4bf7-a7a3-12f6805f0538-kube-api-access-9f2vx" (OuterVolumeSpecName: "kube-api-access-9f2vx") pod "3ac36711-8044-4bf7-a7a3-12f6805f0538" (UID: "3ac36711-8044-4bf7-a7a3-12f6805f0538"). InnerVolumeSpecName "kube-api-access-9f2vx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.100791 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac36711-8044-4bf7-a7a3-12f6805f0538-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ac36711-8044-4bf7-a7a3-12f6805f0538" (UID: "3ac36711-8044-4bf7-a7a3-12f6805f0538"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.139795 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac36711-8044-4bf7-a7a3-12f6805f0538-config-data" (OuterVolumeSpecName: "config-data") pod "3ac36711-8044-4bf7-a7a3-12f6805f0538" (UID: "3ac36711-8044-4bf7-a7a3-12f6805f0538"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.174728 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ac36711-8044-4bf7-a7a3-12f6805f0538-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.174765 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f2vx\" (UniqueName: \"kubernetes.io/projected/3ac36711-8044-4bf7-a7a3-12f6805f0538-kube-api-access-9f2vx\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.174779 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ac36711-8044-4bf7-a7a3-12f6805f0538-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.391041 4661 generic.go:334] "Generic (PLEG): container finished" podID="3ac36711-8044-4bf7-a7a3-12f6805f0538" containerID="aa8e7288364cad916dff48d1f1dbcfccccb4192b05929a79050678fdbc4ecc24" exitCode=0 Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.391113 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3ac36711-8044-4bf7-a7a3-12f6805f0538","Type":"ContainerDied","Data":"aa8e7288364cad916dff48d1f1dbcfccccb4192b05929a79050678fdbc4ecc24"} Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.391140 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3ac36711-8044-4bf7-a7a3-12f6805f0538","Type":"ContainerDied","Data":"ef0c09c5229e734bc9d5ee60b13fa0c34ae3ed4bacf8e905ddb11d40461fa6d8"} Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.391169 4661 scope.go:117] "RemoveContainer" containerID="aa8e7288364cad916dff48d1f1dbcfccccb4192b05929a79050678fdbc4ecc24" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.391723 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.392845 4661 generic.go:334] "Generic (PLEG): container finished" podID="ff41f462-d3c3-445d-b451-6e01d59e6da8" containerID="6ee3d620884cfcf09aae32e9feb55b7bd640425d7ab9d5b4333f5f9509ed04c8" exitCode=143 Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.392891 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff41f462-d3c3-445d-b451-6e01d59e6da8","Type":"ContainerDied","Data":"6ee3d620884cfcf09aae32e9feb55b7bd640425d7ab9d5b4333f5f9509ed04c8"} Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.425322 4661 scope.go:117] "RemoveContainer" containerID="aa8e7288364cad916dff48d1f1dbcfccccb4192b05929a79050678fdbc4ecc24" Jan 20 18:27:55 crc kubenswrapper[4661]: E0120 18:27:55.426819 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa8e7288364cad916dff48d1f1dbcfccccb4192b05929a79050678fdbc4ecc24\": container with ID starting with aa8e7288364cad916dff48d1f1dbcfccccb4192b05929a79050678fdbc4ecc24 not found: ID does not exist" containerID="aa8e7288364cad916dff48d1f1dbcfccccb4192b05929a79050678fdbc4ecc24" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.426875 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa8e7288364cad916dff48d1f1dbcfccccb4192b05929a79050678fdbc4ecc24"} err="failed to get container status \"aa8e7288364cad916dff48d1f1dbcfccccb4192b05929a79050678fdbc4ecc24\": rpc error: code = NotFound desc = could not find container \"aa8e7288364cad916dff48d1f1dbcfccccb4192b05929a79050678fdbc4ecc24\": container with ID starting with aa8e7288364cad916dff48d1f1dbcfccccb4192b05929a79050678fdbc4ecc24 not found: ID does not exist" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.432785 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.472133 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.492878 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:27:55 crc kubenswrapper[4661]: E0120 18:27:55.493679 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7bdc38-707c-4869-8724-6e6319f3ccf6" containerName="dnsmasq-dns" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.493737 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7bdc38-707c-4869-8724-6e6319f3ccf6" containerName="dnsmasq-dns" Jan 20 18:27:55 crc kubenswrapper[4661]: E0120 18:27:55.493766 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ac36711-8044-4bf7-a7a3-12f6805f0538" containerName="nova-scheduler-scheduler" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.493776 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ac36711-8044-4bf7-a7a3-12f6805f0538" containerName="nova-scheduler-scheduler" Jan 20 18:27:55 crc kubenswrapper[4661]: E0120 18:27:55.493792 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7bdc38-707c-4869-8724-6e6319f3ccf6" containerName="init" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.493807 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7bdc38-707c-4869-8724-6e6319f3ccf6" containerName="init" Jan 20 18:27:55 crc kubenswrapper[4661]: E0120 18:27:55.493823 4661 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb4b490-1124-4361-b5fb-ca6db9245b74" containerName="nova-manage" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.493830 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb4b490-1124-4361-b5fb-ca6db9245b74" containerName="nova-manage" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.494242 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7bdc38-707c-4869-8724-6e6319f3ccf6" containerName="dnsmasq-dns" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.494269 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ac36711-8044-4bf7-a7a3-12f6805f0538" containerName="nova-scheduler-scheduler" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.494298 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb4b490-1124-4361-b5fb-ca6db9245b74" containerName="nova-manage" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.496083 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.502108 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.510998 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.582640 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt2r4\" (UniqueName: \"kubernetes.io/projected/47ac760b-6ca8-4048-a60e-e3717fcb25ec-kube-api-access-tt2r4\") pod \"nova-scheduler-0\" (UID: \"47ac760b-6ca8-4048-a60e-e3717fcb25ec\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.582944 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47ac760b-6ca8-4048-a60e-e3717fcb25ec-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47ac760b-6ca8-4048-a60e-e3717fcb25ec\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.583130 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47ac760b-6ca8-4048-a60e-e3717fcb25ec-config-data\") pod \"nova-scheduler-0\" (UID: \"47ac760b-6ca8-4048-a60e-e3717fcb25ec\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.686727 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47ac760b-6ca8-4048-a60e-e3717fcb25ec-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47ac760b-6ca8-4048-a60e-e3717fcb25ec\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.686888 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47ac760b-6ca8-4048-a60e-e3717fcb25ec-config-data\") pod \"nova-scheduler-0\" (UID: \"47ac760b-6ca8-4048-a60e-e3717fcb25ec\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.692927 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt2r4\" (UniqueName: 
\"kubernetes.io/projected/47ac760b-6ca8-4048-a60e-e3717fcb25ec-kube-api-access-tt2r4\") pod \"nova-scheduler-0\" (UID: \"47ac760b-6ca8-4048-a60e-e3717fcb25ec\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.701872 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47ac760b-6ca8-4048-a60e-e3717fcb25ec-config-data\") pod \"nova-scheduler-0\" (UID: \"47ac760b-6ca8-4048-a60e-e3717fcb25ec\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.702498 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47ac760b-6ca8-4048-a60e-e3717fcb25ec-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47ac760b-6ca8-4048-a60e-e3717fcb25ec\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.724458 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt2r4\" (UniqueName: \"kubernetes.io/projected/47ac760b-6ca8-4048-a60e-e3717fcb25ec-kube-api-access-tt2r4\") pod \"nova-scheduler-0\" (UID: \"47ac760b-6ca8-4048-a60e-e3717fcb25ec\") " pod="openstack/nova-scheduler-0" Jan 20 18:27:55 crc kubenswrapper[4661]: I0120 18:27:55.821971 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 18:27:56 crc kubenswrapper[4661]: I0120 18:27:56.152555 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ac36711-8044-4bf7-a7a3-12f6805f0538" path="/var/lib/kubelet/pods/3ac36711-8044-4bf7-a7a3-12f6805f0538/volumes" Jan 20 18:27:56 crc kubenswrapper[4661]: I0120 18:27:56.322826 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 18:27:56 crc kubenswrapper[4661]: W0120 18:27:56.330770 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47ac760b_6ca8_4048_a60e_e3717fcb25ec.slice/crio-0633c5d0c307dacc60bc570c8052f462e8752b976218165369fbe1e8e23fb638 WatchSource:0}: Error finding container 0633c5d0c307dacc60bc570c8052f462e8752b976218165369fbe1e8e23fb638: Status 404 returned error can't find the container with id 0633c5d0c307dacc60bc570c8052f462e8752b976218165369fbe1e8e23fb638 Jan 20 18:27:56 crc kubenswrapper[4661]: I0120 18:27:56.401655 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47ac760b-6ca8-4048-a60e-e3717fcb25ec","Type":"ContainerStarted","Data":"0633c5d0c307dacc60bc570c8052f462e8752b976218165369fbe1e8e23fb638"} Jan 20 18:27:56 crc kubenswrapper[4661]: I0120 18:27:56.774661 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:47632->10.217.0.178:8775: read: connection reset by peer" Jan 20 18:27:56 crc kubenswrapper[4661]: I0120 18:27:56.775020 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": read tcp 10.217.0.2:47636->10.217.0.178:8775: read: connection reset by peer" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.238761 4661 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.325952 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10d37a8-0883-4e5a-9594-b6c216b03c38-logs\") pod \"b10d37a8-0883-4e5a-9594-b6c216b03c38\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.326032 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-config-data\") pod \"b10d37a8-0883-4e5a-9594-b6c216b03c38\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.326209 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-combined-ca-bundle\") pod \"b10d37a8-0883-4e5a-9594-b6c216b03c38\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.326240 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-nova-metadata-tls-certs\") pod \"b10d37a8-0883-4e5a-9594-b6c216b03c38\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.326265 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xstc2\" (UniqueName: \"kubernetes.io/projected/b10d37a8-0883-4e5a-9594-b6c216b03c38-kube-api-access-xstc2\") pod \"b10d37a8-0883-4e5a-9594-b6c216b03c38\" (UID: \"b10d37a8-0883-4e5a-9594-b6c216b03c38\") " Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.329211 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b10d37a8-0883-4e5a-9594-b6c216b03c38-logs" (OuterVolumeSpecName: "logs") pod "b10d37a8-0883-4e5a-9594-b6c216b03c38" (UID: "b10d37a8-0883-4e5a-9594-b6c216b03c38"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.337934 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10d37a8-0883-4e5a-9594-b6c216b03c38-kube-api-access-xstc2" (OuterVolumeSpecName: "kube-api-access-xstc2") pod "b10d37a8-0883-4e5a-9594-b6c216b03c38" (UID: "b10d37a8-0883-4e5a-9594-b6c216b03c38"). InnerVolumeSpecName "kube-api-access-xstc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.370847 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b10d37a8-0883-4e5a-9594-b6c216b03c38" (UID: "b10d37a8-0883-4e5a-9594-b6c216b03c38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.395189 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b10d37a8-0883-4e5a-9594-b6c216b03c38" (UID: "b10d37a8-0883-4e5a-9594-b6c216b03c38"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.396609 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-config-data" (OuterVolumeSpecName: "config-data") pod "b10d37a8-0883-4e5a-9594-b6c216b03c38" (UID: "b10d37a8-0883-4e5a-9594-b6c216b03c38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.412255 4661 generic.go:334] "Generic (PLEG): container finished" podID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerID="49efe543fd7bba20e2fb74d878c116c5c2b7c2095c0136a8ea6e06ae5ceb501d" exitCode=0 Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.412322 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b10d37a8-0883-4e5a-9594-b6c216b03c38","Type":"ContainerDied","Data":"49efe543fd7bba20e2fb74d878c116c5c2b7c2095c0136a8ea6e06ae5ceb501d"} Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.412348 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b10d37a8-0883-4e5a-9594-b6c216b03c38","Type":"ContainerDied","Data":"5af71259454017f1ba4c791f64cd2e0b36bc1ecb0c9668f45f06e1cf5bfb1432"} Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.412363 4661 scope.go:117] "RemoveContainer" containerID="49efe543fd7bba20e2fb74d878c116c5c2b7c2095c0136a8ea6e06ae5ceb501d" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.412485 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.414684 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47ac760b-6ca8-4048-a60e-e3717fcb25ec","Type":"ContainerStarted","Data":"4cd37839a436a7a3ab6ef738113b4d24dbcfa6a9b33aeb621a49fd1ca5472bb4"} Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.430843 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.430905 4661 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.430922 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xstc2\" (UniqueName: \"kubernetes.io/projected/b10d37a8-0883-4e5a-9594-b6c216b03c38-kube-api-access-xstc2\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.430932 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10d37a8-0883-4e5a-9594-b6c216b03c38-logs\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.430942 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10d37a8-0883-4e5a-9594-b6c216b03c38-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.440815 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" 
podStartSLOduration=2.4408004 podStartE2EDuration="2.4408004s" podCreationTimestamp="2026-01-20 18:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:27:57.439817935 +0000 UTC m=+1333.770607597" watchObservedRunningTime="2026-01-20 18:27:57.4408004 +0000 UTC m=+1333.771590062" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.498194 4661 scope.go:117] "RemoveContainer" containerID="49b02e4ed668519864414d43c44c2b4ef92f630699e9f34bb24814112bd47ce5" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.507517 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.528554 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.528582 4661 scope.go:117] "RemoveContainer" containerID="49efe543fd7bba20e2fb74d878c116c5c2b7c2095c0136a8ea6e06ae5ceb501d" Jan 20 18:27:57 crc kubenswrapper[4661]: E0120 18:27:57.529079 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49efe543fd7bba20e2fb74d878c116c5c2b7c2095c0136a8ea6e06ae5ceb501d\": container with ID starting with 49efe543fd7bba20e2fb74d878c116c5c2b7c2095c0136a8ea6e06ae5ceb501d not found: ID does not exist" containerID="49efe543fd7bba20e2fb74d878c116c5c2b7c2095c0136a8ea6e06ae5ceb501d" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.529117 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49efe543fd7bba20e2fb74d878c116c5c2b7c2095c0136a8ea6e06ae5ceb501d"} err="failed to get container status \"49efe543fd7bba20e2fb74d878c116c5c2b7c2095c0136a8ea6e06ae5ceb501d\": rpc error: code = NotFound desc = could not find container \"49efe543fd7bba20e2fb74d878c116c5c2b7c2095c0136a8ea6e06ae5ceb501d\": container with ID starting with 49efe543fd7bba20e2fb74d878c116c5c2b7c2095c0136a8ea6e06ae5ceb501d not found: ID does not exist" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.529146 4661 scope.go:117] "RemoveContainer" containerID="49b02e4ed668519864414d43c44c2b4ef92f630699e9f34bb24814112bd47ce5" Jan 20 18:27:57 crc kubenswrapper[4661]: E0120 18:27:57.529884 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49b02e4ed668519864414d43c44c2b4ef92f630699e9f34bb24814112bd47ce5\": container with ID starting with 49b02e4ed668519864414d43c44c2b4ef92f630699e9f34bb24814112bd47ce5 not found: ID does not exist" containerID="49b02e4ed668519864414d43c44c2b4ef92f630699e9f34bb24814112bd47ce5" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.529911 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49b02e4ed668519864414d43c44c2b4ef92f630699e9f34bb24814112bd47ce5"} err="failed to get container status \"49b02e4ed668519864414d43c44c2b4ef92f630699e9f34bb24814112bd47ce5\": rpc error: code = NotFound desc = could not find container \"49b02e4ed668519864414d43c44c2b4ef92f630699e9f34bb24814112bd47ce5\": container with ID starting with 49b02e4ed668519864414d43c44c2b4ef92f630699e9f34bb24814112bd47ce5 not found: ID does not exist" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.541506 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:57 crc kubenswrapper[4661]: E0120 18:27:57.541892 4661 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerName="nova-metadata-log" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.541905 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerName="nova-metadata-log" Jan 20 18:27:57 crc kubenswrapper[4661]: E0120 18:27:57.541930 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerName="nova-metadata-metadata" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.541935 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerName="nova-metadata-metadata" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.542097 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerName="nova-metadata-log" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.542122 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10d37a8-0883-4e5a-9594-b6c216b03c38" containerName="nova-metadata-metadata" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.543132 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.546619 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.546941 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.554207 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.634430 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/253d6878-90af-44c0-b6d2-dfb0d79a2190-config-data\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.634568 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/253d6878-90af-44c0-b6d2-dfb0d79a2190-logs\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.634661 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j9rr\" (UniqueName: \"kubernetes.io/projected/253d6878-90af-44c0-b6d2-dfb0d79a2190-kube-api-access-4j9rr\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.634829 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/253d6878-90af-44c0-b6d2-dfb0d79a2190-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.634936 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/253d6878-90af-44c0-b6d2-dfb0d79a2190-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.736994 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/253d6878-90af-44c0-b6d2-dfb0d79a2190-logs\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.737352 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/253d6878-90af-44c0-b6d2-dfb0d79a2190-logs\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.737428 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j9rr\" (UniqueName: \"kubernetes.io/projected/253d6878-90af-44c0-b6d2-dfb0d79a2190-kube-api-access-4j9rr\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.737777 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/253d6878-90af-44c0-b6d2-dfb0d79a2190-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.738234 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/253d6878-90af-44c0-b6d2-dfb0d79a2190-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.738300 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/253d6878-90af-44c0-b6d2-dfb0d79a2190-config-data\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.742109 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/253d6878-90af-44c0-b6d2-dfb0d79a2190-config-data\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.742356 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/253d6878-90af-44c0-b6d2-dfb0d79a2190-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.742881 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/253d6878-90af-44c0-b6d2-dfb0d79a2190-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.756597 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-4j9rr\" (UniqueName: \"kubernetes.io/projected/253d6878-90af-44c0-b6d2-dfb0d79a2190-kube-api-access-4j9rr\") pod \"nova-metadata-0\" (UID: \"253d6878-90af-44c0-b6d2-dfb0d79a2190\") " pod="openstack/nova-metadata-0" Jan 20 18:27:57 crc kubenswrapper[4661]: I0120 18:27:57.863056 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 18:27:58 crc kubenswrapper[4661]: I0120 18:27:58.162356 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b10d37a8-0883-4e5a-9594-b6c216b03c38" path="/var/lib/kubelet/pods/b10d37a8-0883-4e5a-9594-b6c216b03c38/volumes" Jan 20 18:27:58 crc kubenswrapper[4661]: I0120 18:27:58.327502 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 18:27:58 crc kubenswrapper[4661]: I0120 18:27:58.432043 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"253d6878-90af-44c0-b6d2-dfb0d79a2190","Type":"ContainerStarted","Data":"e6587d330f69b60ea0bbd4f7fe436e0685047547875de2c9e7d5558b800f5f68"} Jan 20 18:27:59 crc kubenswrapper[4661]: I0120 18:27:59.324196 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:27:59 crc kubenswrapper[4661]: I0120 18:27:59.324855 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:27:59 crc kubenswrapper[4661]: I0120 18:27:59.449938 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"253d6878-90af-44c0-b6d2-dfb0d79a2190","Type":"ContainerStarted","Data":"49fe055b3d84d2fb28d49e7468ceb28f2f1d6e3d2cb1e0447472867cd7b0460d"} Jan 20 18:27:59 crc kubenswrapper[4661]: I0120 18:27:59.450297 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"253d6878-90af-44c0-b6d2-dfb0d79a2190","Type":"ContainerStarted","Data":"b26c2b57f1a2bf8d6f2af42cc4049e2d406c4d1ceff497d102cfc7ba423e8599"} Jan 20 18:27:59 crc kubenswrapper[4661]: I0120 18:27:59.495033 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.49500653 podStartE2EDuration="2.49500653s" podCreationTimestamp="2026-01-20 18:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:27:59.484885276 +0000 UTC m=+1335.815674968" watchObservedRunningTime="2026-01-20 18:27:59.49500653 +0000 UTC m=+1335.825796202" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.457381 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.461861 4661 generic.go:334] "Generic (PLEG): container finished" podID="ff41f462-d3c3-445d-b451-6e01d59e6da8" containerID="f363df48149eaf5b114f51dfc4a4c0ac4129bef329a284890018fedc2458d48e" exitCode=0 Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.461916 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.461966 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff41f462-d3c3-445d-b451-6e01d59e6da8","Type":"ContainerDied","Data":"f363df48149eaf5b114f51dfc4a4c0ac4129bef329a284890018fedc2458d48e"} Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.462008 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff41f462-d3c3-445d-b451-6e01d59e6da8","Type":"ContainerDied","Data":"c6cc9793cf0deb53ab4a89430d8482f3f6ea345fde99c4a6dd69d23e02678c2f"} Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.462028 4661 scope.go:117] "RemoveContainer" containerID="f363df48149eaf5b114f51dfc4a4c0ac4129bef329a284890018fedc2458d48e" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.497414 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-config-data\") pod \"ff41f462-d3c3-445d-b451-6e01d59e6da8\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.497474 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm5wr\" (UniqueName: \"kubernetes.io/projected/ff41f462-d3c3-445d-b451-6e01d59e6da8-kube-api-access-jm5wr\") pod \"ff41f462-d3c3-445d-b451-6e01d59e6da8\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.497497 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-internal-tls-certs\") pod \"ff41f462-d3c3-445d-b451-6e01d59e6da8\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.497524 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f462-d3c3-445d-b451-6e01d59e6da8-logs\") pod \"ff41f462-d3c3-445d-b451-6e01d59e6da8\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.497645 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-combined-ca-bundle\") pod \"ff41f462-d3c3-445d-b451-6e01d59e6da8\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.497719 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-public-tls-certs\") pod \"ff41f462-d3c3-445d-b451-6e01d59e6da8\" (UID: \"ff41f462-d3c3-445d-b451-6e01d59e6da8\") " Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.498588 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ff41f462-d3c3-445d-b451-6e01d59e6da8-logs" (OuterVolumeSpecName: "logs") pod "ff41f462-d3c3-445d-b451-6e01d59e6da8" (UID: "ff41f462-d3c3-445d-b451-6e01d59e6da8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.500809 4661 scope.go:117] "RemoveContainer" containerID="6ee3d620884cfcf09aae32e9feb55b7bd640425d7ab9d5b4333f5f9509ed04c8" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.509802 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff41f462-d3c3-445d-b451-6e01d59e6da8-kube-api-access-jm5wr" (OuterVolumeSpecName: "kube-api-access-jm5wr") pod "ff41f462-d3c3-445d-b451-6e01d59e6da8" (UID: "ff41f462-d3c3-445d-b451-6e01d59e6da8"). InnerVolumeSpecName "kube-api-access-jm5wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.534551 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff41f462-d3c3-445d-b451-6e01d59e6da8" (UID: "ff41f462-d3c3-445d-b451-6e01d59e6da8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.541136 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-config-data" (OuterVolumeSpecName: "config-data") pod "ff41f462-d3c3-445d-b451-6e01d59e6da8" (UID: "ff41f462-d3c3-445d-b451-6e01d59e6da8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.558238 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ff41f462-d3c3-445d-b451-6e01d59e6da8" (UID: "ff41f462-d3c3-445d-b451-6e01d59e6da8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.572615 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ff41f462-d3c3-445d-b451-6e01d59e6da8" (UID: "ff41f462-d3c3-445d-b451-6e01d59e6da8"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.599806 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.599838 4661 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.599847 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.599855 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm5wr\" (UniqueName: \"kubernetes.io/projected/ff41f462-d3c3-445d-b451-6e01d59e6da8-kube-api-access-jm5wr\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.599872 4661 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f462-d3c3-445d-b451-6e01d59e6da8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.599880 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f462-d3c3-445d-b451-6e01d59e6da8-logs\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.654392 4661 scope.go:117] "RemoveContainer" containerID="f363df48149eaf5b114f51dfc4a4c0ac4129bef329a284890018fedc2458d48e" Jan 20 18:28:00 crc kubenswrapper[4661]: E0120 18:28:00.654870 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f363df48149eaf5b114f51dfc4a4c0ac4129bef329a284890018fedc2458d48e\": container with ID starting with f363df48149eaf5b114f51dfc4a4c0ac4129bef329a284890018fedc2458d48e not found: ID does not exist" containerID="f363df48149eaf5b114f51dfc4a4c0ac4129bef329a284890018fedc2458d48e" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.654916 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f363df48149eaf5b114f51dfc4a4c0ac4129bef329a284890018fedc2458d48e"} err="failed to get container status \"f363df48149eaf5b114f51dfc4a4c0ac4129bef329a284890018fedc2458d48e\": rpc error: code = NotFound desc = could not find container \"f363df48149eaf5b114f51dfc4a4c0ac4129bef329a284890018fedc2458d48e\": container with ID starting with f363df48149eaf5b114f51dfc4a4c0ac4129bef329a284890018fedc2458d48e not found: ID does not exist" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.654944 4661 scope.go:117] "RemoveContainer" containerID="6ee3d620884cfcf09aae32e9feb55b7bd640425d7ab9d5b4333f5f9509ed04c8" Jan 20 18:28:00 crc kubenswrapper[4661]: E0120 18:28:00.655233 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee3d620884cfcf09aae32e9feb55b7bd640425d7ab9d5b4333f5f9509ed04c8\": container with ID starting with 6ee3d620884cfcf09aae32e9feb55b7bd640425d7ab9d5b4333f5f9509ed04c8 not found: ID does not exist" containerID="6ee3d620884cfcf09aae32e9feb55b7bd640425d7ab9d5b4333f5f9509ed04c8" Jan 20 18:28:00 
crc kubenswrapper[4661]: I0120 18:28:00.655261 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee3d620884cfcf09aae32e9feb55b7bd640425d7ab9d5b4333f5f9509ed04c8"} err="failed to get container status \"6ee3d620884cfcf09aae32e9feb55b7bd640425d7ab9d5b4333f5f9509ed04c8\": rpc error: code = NotFound desc = could not find container \"6ee3d620884cfcf09aae32e9feb55b7bd640425d7ab9d5b4333f5f9509ed04c8\": container with ID starting with 6ee3d620884cfcf09aae32e9feb55b7bd640425d7ab9d5b4333f5f9509ed04c8 not found: ID does not exist" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.794099 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.805106 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.824002 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 18:28:00 crc kubenswrapper[4661]: E0120 18:28:00.824438 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff41f462-d3c3-445d-b451-6e01d59e6da8" containerName="nova-api-log" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.824457 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff41f462-d3c3-445d-b451-6e01d59e6da8" containerName="nova-api-log" Jan 20 18:28:00 crc kubenswrapper[4661]: E0120 18:28:00.824491 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff41f462-d3c3-445d-b451-6e01d59e6da8" containerName="nova-api-api" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.824499 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff41f462-d3c3-445d-b451-6e01d59e6da8" containerName="nova-api-api" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.825072 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff41f462-d3c3-445d-b451-6e01d59e6da8" containerName="nova-api-log" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.825103 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff41f462-d3c3-445d-b451-6e01d59e6da8" containerName="nova-api-api" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.826176 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.827654 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.832462 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.833231 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.835972 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.836988 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.906802 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2r59\" (UniqueName: \"kubernetes.io/projected/c36f847e-e718-445f-927e-4c6145c5ac8d-kube-api-access-m2r59\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.906861 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36f847e-e718-445f-927e-4c6145c5ac8d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.906890 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36f847e-e718-445f-927e-4c6145c5ac8d-public-tls-certs\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.906936 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36f847e-e718-445f-927e-4c6145c5ac8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.907007 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c36f847e-e718-445f-927e-4c6145c5ac8d-logs\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:00 crc kubenswrapper[4661]: I0120 18:28:00.907069 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c36f847e-e718-445f-927e-4c6145c5ac8d-config-data\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.008241 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c36f847e-e718-445f-927e-4c6145c5ac8d-logs\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.008349 4661 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c36f847e-e718-445f-927e-4c6145c5ac8d-config-data\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.008424 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2r59\" (UniqueName: \"kubernetes.io/projected/c36f847e-e718-445f-927e-4c6145c5ac8d-kube-api-access-m2r59\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.008452 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36f847e-e718-445f-927e-4c6145c5ac8d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.008482 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36f847e-e718-445f-927e-4c6145c5ac8d-public-tls-certs\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.008514 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36f847e-e718-445f-927e-4c6145c5ac8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.009636 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c36f847e-e718-445f-927e-4c6145c5ac8d-logs\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.014012 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36f847e-e718-445f-927e-4c6145c5ac8d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.014244 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36f847e-e718-445f-927e-4c6145c5ac8d-public-tls-certs\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.015400 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36f847e-e718-445f-927e-4c6145c5ac8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.016135 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c36f847e-e718-445f-927e-4c6145c5ac8d-config-data\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.028484 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2r59\" (UniqueName: 
\"kubernetes.io/projected/c36f847e-e718-445f-927e-4c6145c5ac8d-kube-api-access-m2r59\") pod \"nova-api-0\" (UID: \"c36f847e-e718-445f-927e-4c6145c5ac8d\") " pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.144995 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 18:28:01 crc kubenswrapper[4661]: I0120 18:28:01.704157 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 18:28:02 crc kubenswrapper[4661]: I0120 18:28:02.154297 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff41f462-d3c3-445d-b451-6e01d59e6da8" path="/var/lib/kubelet/pods/ff41f462-d3c3-445d-b451-6e01d59e6da8/volumes" Jan 20 18:28:02 crc kubenswrapper[4661]: I0120 18:28:02.482093 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c36f847e-e718-445f-927e-4c6145c5ac8d","Type":"ContainerStarted","Data":"37d5a14772d3da24d09e25f45338f798e9462ce274c068ea7fac29ca5cfa5fb6"} Jan 20 18:28:02 crc kubenswrapper[4661]: I0120 18:28:02.482389 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c36f847e-e718-445f-927e-4c6145c5ac8d","Type":"ContainerStarted","Data":"b5230679071be8957c517d50b6caf1e32828495eb059b02f7c0fc139dea8dcf7"} Jan 20 18:28:02 crc kubenswrapper[4661]: I0120 18:28:02.482453 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c36f847e-e718-445f-927e-4c6145c5ac8d","Type":"ContainerStarted","Data":"aeaed4a46aa881e5fcc65bf79af8c36417f2cfebb99656942ed5c7dc02cc595e"} Jan 20 18:28:02 crc kubenswrapper[4661]: I0120 18:28:02.505904 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5058855859999998 podStartE2EDuration="2.505885586s" podCreationTimestamp="2026-01-20 18:28:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:28:02.498546885 +0000 UTC m=+1338.829336547" watchObservedRunningTime="2026-01-20 18:28:02.505885586 +0000 UTC m=+1338.836675248" Jan 20 18:28:02 crc kubenswrapper[4661]: I0120 18:28:02.863450 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 18:28:02 crc kubenswrapper[4661]: I0120 18:28:02.863618 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 18:28:05 crc kubenswrapper[4661]: I0120 18:28:05.822999 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 20 18:28:05 crc kubenswrapper[4661]: I0120 18:28:05.857559 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 20 18:28:06 crc kubenswrapper[4661]: I0120 18:28:06.546600 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 20 18:28:07 crc kubenswrapper[4661]: I0120 18:28:07.863925 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 18:28:07 crc kubenswrapper[4661]: I0120 18:28:07.864065 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 18:28:08 crc kubenswrapper[4661]: I0120 18:28:08.880064 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="253d6878-90af-44c0-b6d2-dfb0d79a2190" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 18:28:08 crc kubenswrapper[4661]: I0120 18:28:08.880374 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="253d6878-90af-44c0-b6d2-dfb0d79a2190" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 18:28:11 crc kubenswrapper[4661]: I0120 18:28:11.145253 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 18:28:11 crc kubenswrapper[4661]: I0120 18:28:11.145654 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 18:28:12 crc kubenswrapper[4661]: I0120 18:28:12.160831 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c36f847e-e718-445f-927e-4c6145c5ac8d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 18:28:12 crc kubenswrapper[4661]: I0120 18:28:12.160854 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c36f847e-e718-445f-927e-4c6145c5ac8d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 18:28:12 crc kubenswrapper[4661]: I0120 18:28:12.597906 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 18:28:14 crc kubenswrapper[4661]: I0120 18:28:14.707739 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w4m2" podUID="04a8f9c5-45fc-47db-adf2-3de38af2cf96" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 18:28:17 crc kubenswrapper[4661]: I0120 18:28:17.871851 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 18:28:17 crc kubenswrapper[4661]: I0120 18:28:17.872493 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 18:28:17 crc kubenswrapper[4661]: I0120 18:28:17.882177 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 18:28:17 crc kubenswrapper[4661]: I0120 18:28:17.884184 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 18:28:21 crc kubenswrapper[4661]: I0120 18:28:21.151656 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 20 18:28:21 crc kubenswrapper[4661]: I0120 18:28:21.152488 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 18:28:21 crc kubenswrapper[4661]: I0120 18:28:21.168349 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 18:28:21 crc kubenswrapper[4661]: I0120 18:28:21.175031 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Jan 20 18:28:21 crc kubenswrapper[4661]: I0120 18:28:21.834032 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 18:28:21 crc kubenswrapper[4661]: I0120 18:28:21.842882 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 18:28:29 crc kubenswrapper[4661]: I0120 18:28:29.324271 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:28:29 crc kubenswrapper[4661]: I0120 18:28:29.325125 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:28:29 crc kubenswrapper[4661]: I0120 18:28:29.925880 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 18:28:30 crc kubenswrapper[4661]: I0120 18:28:30.970214 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 18:28:34 crc kubenswrapper[4661]: I0120 18:28:34.966007 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b764feba-067a-4a59-a23b-9a9b7725f420" containerName="rabbitmq" containerID="cri-o://f76ef2f693bfe5015a9e3c2fa43181da0038d57efb1604983c4edf8f96f9dfef" gracePeriod=604795 Jan 20 18:28:35 crc kubenswrapper[4661]: I0120 18:28:35.120581 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b764feba-067a-4a59-a23b-9a9b7725f420" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 20 18:28:35 crc kubenswrapper[4661]: I0120 18:28:35.242799 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" containerName="rabbitmq" containerID="cri-o://54ed9a66e76eeab8c7020b313888bd2f37e08aaa23c11a301fd10b6bde71813e" gracePeriod=604796 Jan 20 18:28:35 crc kubenswrapper[4661]: I0120 18:28:35.472617 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.554659 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.645871 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b764feba-067a-4a59-a23b-9a9b7725f420-erlang-cookie-secret\") pod \"b764feba-067a-4a59-a23b-9a9b7725f420\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.645920 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-plugins-conf\") pod \"b764feba-067a-4a59-a23b-9a9b7725f420\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.645982 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-config-data\") pod \"b764feba-067a-4a59-a23b-9a9b7725f420\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.646030 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-server-conf\") pod \"b764feba-067a-4a59-a23b-9a9b7725f420\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.646083 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-tls\") pod \"b764feba-067a-4a59-a23b-9a9b7725f420\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.646112 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-erlang-cookie\") pod \"b764feba-067a-4a59-a23b-9a9b7725f420\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.646144 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"b764feba-067a-4a59-a23b-9a9b7725f420\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.646173 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7sz2\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-kube-api-access-j7sz2\") pod \"b764feba-067a-4a59-a23b-9a9b7725f420\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.646191 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-confd\") pod \"b764feba-067a-4a59-a23b-9a9b7725f420\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.646213 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-plugins\") pod \"b764feba-067a-4a59-a23b-9a9b7725f420\" (UID: 
\"b764feba-067a-4a59-a23b-9a9b7725f420\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.646249 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b764feba-067a-4a59-a23b-9a9b7725f420-pod-info\") pod \"b764feba-067a-4a59-a23b-9a9b7725f420\" (UID: \"b764feba-067a-4a59-a23b-9a9b7725f420\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.647118 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b764feba-067a-4a59-a23b-9a9b7725f420" (UID: "b764feba-067a-4a59-a23b-9a9b7725f420"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.647725 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b764feba-067a-4a59-a23b-9a9b7725f420" (UID: "b764feba-067a-4a59-a23b-9a9b7725f420"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.660122 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-kube-api-access-j7sz2" (OuterVolumeSpecName: "kube-api-access-j7sz2") pod "b764feba-067a-4a59-a23b-9a9b7725f420" (UID: "b764feba-067a-4a59-a23b-9a9b7725f420"). InnerVolumeSpecName "kube-api-access-j7sz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.667308 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "b764feba-067a-4a59-a23b-9a9b7725f420" (UID: "b764feba-067a-4a59-a23b-9a9b7725f420"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.671450 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b764feba-067a-4a59-a23b-9a9b7725f420" (UID: "b764feba-067a-4a59-a23b-9a9b7725f420"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.674967 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b764feba-067a-4a59-a23b-9a9b7725f420-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b764feba-067a-4a59-a23b-9a9b7725f420" (UID: "b764feba-067a-4a59-a23b-9a9b7725f420"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.688840 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b764feba-067a-4a59-a23b-9a9b7725f420-pod-info" (OuterVolumeSpecName: "pod-info") pod "b764feba-067a-4a59-a23b-9a9b7725f420" (UID: "b764feba-067a-4a59-a23b-9a9b7725f420"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.691637 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b764feba-067a-4a59-a23b-9a9b7725f420" (UID: "b764feba-067a-4a59-a23b-9a9b7725f420"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.704571 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-config-data" (OuterVolumeSpecName: "config-data") pod "b764feba-067a-4a59-a23b-9a9b7725f420" (UID: "b764feba-067a-4a59-a23b-9a9b7725f420"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.748318 4661 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b764feba-067a-4a59-a23b-9a9b7725f420-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.748351 4661 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.748360 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.748370 4661 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.748378 4661 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.748400 4661 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.748408 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7sz2\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-kube-api-access-j7sz2\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.748416 4661 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.748425 4661 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b764feba-067a-4a59-a23b-9a9b7725f420-pod-info\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.773037 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.788589 4661 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.820067 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-server-conf" (OuterVolumeSpecName: "server-conf") pod "b764feba-067a-4a59-a23b-9a9b7725f420" (UID: "b764feba-067a-4a59-a23b-9a9b7725f420"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.850867 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-tls\") pod \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.850980 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-server-conf\") pod \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.851012 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-confd\") pod \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.851048 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-plugins\") pod \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.851083 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzql6\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-kube-api-access-bzql6\") pod \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.851101 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.851151 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-plugins-conf\") pod \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.851196 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-erlang-cookie\") pod \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " Jan 20 
18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.851239 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-config-data\") pod \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.851263 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-pod-info\") pod \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.851319 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-erlang-cookie-secret\") pod \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\" (UID: \"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c\") " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.851766 4661 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.851789 4661 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b764feba-067a-4a59-a23b-9a9b7725f420-server-conf\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.854952 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" (UID: "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.855315 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" (UID: "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.855774 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" (UID: "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.861555 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" (UID: "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.868865 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" (UID: "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.869380 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-kube-api-access-bzql6" (OuterVolumeSpecName: "kube-api-access-bzql6") pod "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" (UID: "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c"). InnerVolumeSpecName "kube-api-access-bzql6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.871094 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-pod-info" (OuterVolumeSpecName: "pod-info") pod "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" (UID: "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.887844 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" (UID: "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.908060 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-config-data" (OuterVolumeSpecName: "config-data") pod "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" (UID: "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.946256 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-server-conf" (OuterVolumeSpecName: "server-conf") pod "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" (UID: "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.946407 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b764feba-067a-4a59-a23b-9a9b7725f420" (UID: "b764feba-067a-4a59-a23b-9a9b7725f420"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.953571 4661 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b764feba-067a-4a59-a23b-9a9b7725f420-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.953611 4661 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.953624 4661 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.953635 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzql6\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-kube-api-access-bzql6\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.953662 4661 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.953696 4661 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.953708 4661 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.953720 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.953730 4661 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.953740 4661 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.953749 4661 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:41 crc kubenswrapper[4661]: I0120 18:28:41.974466 4661 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.009378 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" (UID: 
"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.018580 4661 generic.go:334] "Generic (PLEG): container finished" podID="b764feba-067a-4a59-a23b-9a9b7725f420" containerID="f76ef2f693bfe5015a9e3c2fa43181da0038d57efb1604983c4edf8f96f9dfef" exitCode=0 Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.018643 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.018696 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b764feba-067a-4a59-a23b-9a9b7725f420","Type":"ContainerDied","Data":"f76ef2f693bfe5015a9e3c2fa43181da0038d57efb1604983c4edf8f96f9dfef"} Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.018743 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b764feba-067a-4a59-a23b-9a9b7725f420","Type":"ContainerDied","Data":"57b498dfb51f9d23488313c50d7e93db17366501a94de9dedc3fc5727d94708b"} Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.018819 4661 scope.go:117] "RemoveContainer" containerID="f76ef2f693bfe5015a9e3c2fa43181da0038d57efb1604983c4edf8f96f9dfef" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.024484 4661 generic.go:334] "Generic (PLEG): container finished" podID="19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" containerID="54ed9a66e76eeab8c7020b313888bd2f37e08aaa23c11a301fd10b6bde71813e" exitCode=0 Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.024515 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c","Type":"ContainerDied","Data":"54ed9a66e76eeab8c7020b313888bd2f37e08aaa23c11a301fd10b6bde71813e"} Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.024536 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c","Type":"ContainerDied","Data":"455c04a109377b319c20e188276ca2154a9c7a825716089f2558649fcee5ea68"} Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.024585 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.042895 4661 scope.go:117] "RemoveContainer" containerID="8ded1c3a4a61ed6debad60aa74cc2e5774f7de46bb912d100ebb824fcb556ec7" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.056255 4661 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.056290 4661 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.063244 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.081296 4661 scope.go:117] "RemoveContainer" containerID="f76ef2f693bfe5015a9e3c2fa43181da0038d57efb1604983c4edf8f96f9dfef" Jan 20 18:28:42 crc kubenswrapper[4661]: E0120 18:28:42.081736 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76ef2f693bfe5015a9e3c2fa43181da0038d57efb1604983c4edf8f96f9dfef\": container with ID starting with f76ef2f693bfe5015a9e3c2fa43181da0038d57efb1604983c4edf8f96f9dfef not found: ID does not exist" containerID="f76ef2f693bfe5015a9e3c2fa43181da0038d57efb1604983c4edf8f96f9dfef" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.081762 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76ef2f693bfe5015a9e3c2fa43181da0038d57efb1604983c4edf8f96f9dfef"} err="failed to get container status \"f76ef2f693bfe5015a9e3c2fa43181da0038d57efb1604983c4edf8f96f9dfef\": rpc error: code = NotFound desc = could not find container \"f76ef2f693bfe5015a9e3c2fa43181da0038d57efb1604983c4edf8f96f9dfef\": container with ID starting with f76ef2f693bfe5015a9e3c2fa43181da0038d57efb1604983c4edf8f96f9dfef not found: ID does not exist" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.081781 4661 scope.go:117] "RemoveContainer" containerID="8ded1c3a4a61ed6debad60aa74cc2e5774f7de46bb912d100ebb824fcb556ec7" Jan 20 18:28:42 crc kubenswrapper[4661]: E0120 18:28:42.082005 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ded1c3a4a61ed6debad60aa74cc2e5774f7de46bb912d100ebb824fcb556ec7\": container with ID starting with 8ded1c3a4a61ed6debad60aa74cc2e5774f7de46bb912d100ebb824fcb556ec7 not found: ID does not exist" containerID="8ded1c3a4a61ed6debad60aa74cc2e5774f7de46bb912d100ebb824fcb556ec7" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.082028 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ded1c3a4a61ed6debad60aa74cc2e5774f7de46bb912d100ebb824fcb556ec7"} err="failed to get container status \"8ded1c3a4a61ed6debad60aa74cc2e5774f7de46bb912d100ebb824fcb556ec7\": rpc error: code = NotFound desc = could not find container \"8ded1c3a4a61ed6debad60aa74cc2e5774f7de46bb912d100ebb824fcb556ec7\": container with ID starting with 8ded1c3a4a61ed6debad60aa74cc2e5774f7de46bb912d100ebb824fcb556ec7 not found: ID does not exist" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.082042 4661 scope.go:117] "RemoveContainer" 
containerID="54ed9a66e76eeab8c7020b313888bd2f37e08aaa23c11a301fd10b6bde71813e" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.091558 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.099481 4661 scope.go:117] "RemoveContainer" containerID="c76853192ee68da9d687430ebad4adc079589fbe7ce8c6c6524d5c045257a90f" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.113919 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.128362 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.140004 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 18:28:42 crc kubenswrapper[4661]: E0120 18:28:42.140507 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" containerName="rabbitmq" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.140529 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" containerName="rabbitmq" Jan 20 18:28:42 crc kubenswrapper[4661]: E0120 18:28:42.140545 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b764feba-067a-4a59-a23b-9a9b7725f420" containerName="setup-container" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.140553 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b764feba-067a-4a59-a23b-9a9b7725f420" containerName="setup-container" Jan 20 18:28:42 crc kubenswrapper[4661]: E0120 18:28:42.140572 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b764feba-067a-4a59-a23b-9a9b7725f420" containerName="rabbitmq" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.140580 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b764feba-067a-4a59-a23b-9a9b7725f420" containerName="rabbitmq" Jan 20 18:28:42 crc kubenswrapper[4661]: E0120 18:28:42.140608 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" containerName="setup-container" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.140617 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" containerName="setup-container" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.140839 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b764feba-067a-4a59-a23b-9a9b7725f420" containerName="rabbitmq" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.140863 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" containerName="rabbitmq" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.142029 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.145396 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.145409 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.148770 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.148959 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.149066 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.149222 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.158069 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zv9vw" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.168236 4661 scope.go:117] "RemoveContainer" containerID="54ed9a66e76eeab8c7020b313888bd2f37e08aaa23c11a301fd10b6bde71813e" Jan 20 18:28:42 crc kubenswrapper[4661]: E0120 18:28:42.168655 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54ed9a66e76eeab8c7020b313888bd2f37e08aaa23c11a301fd10b6bde71813e\": container with ID starting with 54ed9a66e76eeab8c7020b313888bd2f37e08aaa23c11a301fd10b6bde71813e not found: ID does not exist" containerID="54ed9a66e76eeab8c7020b313888bd2f37e08aaa23c11a301fd10b6bde71813e" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.168699 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ed9a66e76eeab8c7020b313888bd2f37e08aaa23c11a301fd10b6bde71813e"} err="failed to get container status \"54ed9a66e76eeab8c7020b313888bd2f37e08aaa23c11a301fd10b6bde71813e\": rpc error: code = NotFound desc = could not find container \"54ed9a66e76eeab8c7020b313888bd2f37e08aaa23c11a301fd10b6bde71813e\": container with ID starting with 54ed9a66e76eeab8c7020b313888bd2f37e08aaa23c11a301fd10b6bde71813e not found: ID does not exist" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.168722 4661 scope.go:117] "RemoveContainer" containerID="c76853192ee68da9d687430ebad4adc079589fbe7ce8c6c6524d5c045257a90f" Jan 20 18:28:42 crc kubenswrapper[4661]: E0120 18:28:42.168922 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c76853192ee68da9d687430ebad4adc079589fbe7ce8c6c6524d5c045257a90f\": container with ID starting with c76853192ee68da9d687430ebad4adc079589fbe7ce8c6c6524d5c045257a90f not found: ID does not exist" containerID="c76853192ee68da9d687430ebad4adc079589fbe7ce8c6c6524d5c045257a90f" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.168945 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c76853192ee68da9d687430ebad4adc079589fbe7ce8c6c6524d5c045257a90f"} err="failed to get container status \"c76853192ee68da9d687430ebad4adc079589fbe7ce8c6c6524d5c045257a90f\": rpc error: code = NotFound desc = could not find container 
\"c76853192ee68da9d687430ebad4adc079589fbe7ce8c6c6524d5c045257a90f\": container with ID starting with c76853192ee68da9d687430ebad4adc079589fbe7ce8c6c6524d5c045257a90f not found: ID does not exist" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.172712 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c" path="/var/lib/kubelet/pods/19a9e039-d4eb-475e-9ca9-6a6f6bfeb36c/volumes" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.173443 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b764feba-067a-4a59-a23b-9a9b7725f420" path="/var/lib/kubelet/pods/b764feba-067a-4a59-a23b-9a9b7725f420/volumes" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.182879 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.184872 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.186946 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.187150 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.187740 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.187829 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.187846 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.188096 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.188336 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4qjvk" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.206160 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.217872 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.258296 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.258373 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.258440 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.258461 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.258474 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqjbl\" (UniqueName: \"kubernetes.io/projected/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-kube-api-access-zqjbl\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.258491 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.258511 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.258528 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-config-data\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.258553 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.258586 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.258599 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360330 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360486 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7301e169-326c-4397-89f7-28b94553cef4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360514 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7301e169-326c-4397-89f7-28b94553cef4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360551 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360568 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360587 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7301e169-326c-4397-89f7-28b94553cef4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360634 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7301e169-326c-4397-89f7-28b94553cef4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360652 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7301e169-326c-4397-89f7-28b94553cef4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360689 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360726 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " 
pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360760 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7301e169-326c-4397-89f7-28b94553cef4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360847 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7301e169-326c-4397-89f7-28b94553cef4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360869 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7301e169-326c-4397-89f7-28b94553cef4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360887 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7301e169-326c-4397-89f7-28b94553cef4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.360943 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.361496 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.361505 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.362175 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.362223 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.362254 4661 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.362274 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqjbl\" (UniqueName: \"kubernetes.io/projected/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-kube-api-access-zqjbl\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.362312 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.362335 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.362349 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmxgw\" (UniqueName: \"kubernetes.io/projected/7301e169-326c-4397-89f7-28b94553cef4-kube-api-access-pmxgw\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.362383 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.362401 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-config-data\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.363056 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.363341 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-config-data\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.366570 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " 
pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.367129 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.368423 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.371059 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.388719 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqjbl\" (UniqueName: \"kubernetes.io/projected/5a690866-3b40-4a9f-ba41-a5a3a6d76c95-kube-api-access-zqjbl\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.398392 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"5a690866-3b40-4a9f-ba41-a5a3a6d76c95\") " pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.463611 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.463697 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmxgw\" (UniqueName: \"kubernetes.io/projected/7301e169-326c-4397-89f7-28b94553cef4-kube-api-access-pmxgw\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.463746 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7301e169-326c-4397-89f7-28b94553cef4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.463765 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7301e169-326c-4397-89f7-28b94553cef4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.463784 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/7301e169-326c-4397-89f7-28b94553cef4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.463808 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7301e169-326c-4397-89f7-28b94553cef4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.463825 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7301e169-326c-4397-89f7-28b94553cef4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.463853 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7301e169-326c-4397-89f7-28b94553cef4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.463886 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7301e169-326c-4397-89f7-28b94553cef4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.463902 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7301e169-326c-4397-89f7-28b94553cef4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.463920 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7301e169-326c-4397-89f7-28b94553cef4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.464787 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.465150 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7301e169-326c-4397-89f7-28b94553cef4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.466136 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7301e169-326c-4397-89f7-28b94553cef4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.466183 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7301e169-326c-4397-89f7-28b94553cef4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.466368 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7301e169-326c-4397-89f7-28b94553cef4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.467072 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7301e169-326c-4397-89f7-28b94553cef4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.472029 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7301e169-326c-4397-89f7-28b94553cef4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.476493 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7301e169-326c-4397-89f7-28b94553cef4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.479094 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7301e169-326c-4397-89f7-28b94553cef4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.495271 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7301e169-326c-4397-89f7-28b94553cef4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.522356 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmxgw\" (UniqueName: \"kubernetes.io/projected/7301e169-326c-4397-89f7-28b94553cef4-kube-api-access-pmxgw\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.523348 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7301e169-326c-4397-89f7-28b94553cef4\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.551271 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 18:28:42 crc kubenswrapper[4661]: I0120 18:28:42.565783 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:28:43 crc kubenswrapper[4661]: W0120 18:28:43.047003 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a690866_3b40_4a9f_ba41_a5a3a6d76c95.slice/crio-19c743ab746d436b4aaf66b69474ff28bee8e465363c796b7ca8452af5839723 WatchSource:0}: Error finding container 19c743ab746d436b4aaf66b69474ff28bee8e465363c796b7ca8452af5839723: Status 404 returned error can't find the container with id 19c743ab746d436b4aaf66b69474ff28bee8e465363c796b7ca8452af5839723 Jan 20 18:28:43 crc kubenswrapper[4661]: I0120 18:28:43.049307 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 18:28:43 crc kubenswrapper[4661]: I0120 18:28:43.066149 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 18:28:44 crc kubenswrapper[4661]: I0120 18:28:44.047270 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a690866-3b40-4a9f-ba41-a5a3a6d76c95","Type":"ContainerStarted","Data":"19c743ab746d436b4aaf66b69474ff28bee8e465363c796b7ca8452af5839723"} Jan 20 18:28:44 crc kubenswrapper[4661]: I0120 18:28:44.050738 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7301e169-326c-4397-89f7-28b94553cef4","Type":"ContainerStarted","Data":"d2de4074087924263ea210079707ea8eaf76761f6ec8ed4ceac13c841c62764d"} Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.071443 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a690866-3b40-4a9f-ba41-a5a3a6d76c95","Type":"ContainerStarted","Data":"4d43df4ee5a1eef1e28c16c7e75b021a84c46bfc1810c8498284d8ad97f27950"} Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.078471 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7301e169-326c-4397-89f7-28b94553cef4","Type":"ContainerStarted","Data":"7f6d40b819df9722a5445b60ea9efba0f3bab7b98baccf8c7b259a5e400f21fb"} Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.487643 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-hh7hz"] Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.489138 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.492129 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.516576 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-hh7hz"] Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.619161 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.619319 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.619368 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.619427 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-config\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.619458 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nmfp\" (UniqueName: \"kubernetes.io/projected/3e1553a5-bc2a-47cc-95ce-57f35366273b-kube-api-access-9nmfp\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.619507 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.720991 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.721098 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: 
\"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.721159 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.721195 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-config\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.721218 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nmfp\" (UniqueName: \"kubernetes.io/projected/3e1553a5-bc2a-47cc-95ce-57f35366273b-kube-api-access-9nmfp\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.721270 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.721800 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.722073 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.722390 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.722749 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-config\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.722933 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc 
kubenswrapper[4661]: I0120 18:28:45.748990 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nmfp\" (UniqueName: \"kubernetes.io/projected/3e1553a5-bc2a-47cc-95ce-57f35366273b-kube-api-access-9nmfp\") pod \"dnsmasq-dns-6447ccbd8f-hh7hz\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:45 crc kubenswrapper[4661]: I0120 18:28:45.810459 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:46 crc kubenswrapper[4661]: I0120 18:28:46.321824 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-hh7hz"] Jan 20 18:28:47 crc kubenswrapper[4661]: I0120 18:28:47.099228 4661 generic.go:334] "Generic (PLEG): container finished" podID="3e1553a5-bc2a-47cc-95ce-57f35366273b" containerID="42425f22809477de2c603dcaf1c9d425fb978094e92e53097fec690a91128e3b" exitCode=0 Jan 20 18:28:47 crc kubenswrapper[4661]: I0120 18:28:47.099313 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" event={"ID":"3e1553a5-bc2a-47cc-95ce-57f35366273b","Type":"ContainerDied","Data":"42425f22809477de2c603dcaf1c9d425fb978094e92e53097fec690a91128e3b"} Jan 20 18:28:47 crc kubenswrapper[4661]: I0120 18:28:47.099742 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" event={"ID":"3e1553a5-bc2a-47cc-95ce-57f35366273b","Type":"ContainerStarted","Data":"d1a7697058c9c55cf5cb47594271d6bfbf21ac20ffa8e97a92fc7bf416fc377d"} Jan 20 18:28:48 crc kubenswrapper[4661]: I0120 18:28:48.118179 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" event={"ID":"3e1553a5-bc2a-47cc-95ce-57f35366273b","Type":"ContainerStarted","Data":"d48f04495b9c38c6802453d82eaba78701cd4ccbe43dce737dedb0b6002b0e49"} Jan 20 18:28:48 crc kubenswrapper[4661]: I0120 18:28:48.118493 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:48 crc kubenswrapper[4661]: I0120 18:28:48.163949 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" podStartSLOduration=3.163927234 podStartE2EDuration="3.163927234s" podCreationTimestamp="2026-01-20 18:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:28:48.161927927 +0000 UTC m=+1384.492717659" watchObservedRunningTime="2026-01-20 18:28:48.163927234 +0000 UTC m=+1384.494716906" Jan 20 18:28:55 crc kubenswrapper[4661]: I0120 18:28:55.812618 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:28:55 crc kubenswrapper[4661]: I0120 18:28:55.908737 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-x6dtq"] Jan 20 18:28:55 crc kubenswrapper[4661]: I0120 18:28:55.909000 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" podUID="70b388ac-7547-498d-9bc2-97c8248fe4bc" containerName="dnsmasq-dns" containerID="cri-o://ceed4bd7cda1f544ac2f5a8e3be9e2511e166bc08c31d936f1955c3ed45f5383" gracePeriod=10 Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.099082 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fb68d687f-b2v4r"] Jan 20 18:28:56 crc 
kubenswrapper[4661]: I0120 18:28:56.100432 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.110930 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb68d687f-b2v4r"] Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.138432 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-ovsdbserver-nb\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.138519 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c56dl\" (UniqueName: \"kubernetes.io/projected/370135b1-1365-490b-a9ae-d8ffb1361718-kube-api-access-c56dl\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.138558 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-dns-svc\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.138593 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-ovsdbserver-sb\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.138623 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-openstack-edpm-ipam\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.138649 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-config\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.204588 4661 generic.go:334] "Generic (PLEG): container finished" podID="70b388ac-7547-498d-9bc2-97c8248fe4bc" containerID="ceed4bd7cda1f544ac2f5a8e3be9e2511e166bc08c31d936f1955c3ed45f5383" exitCode=0 Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.204633 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" event={"ID":"70b388ac-7547-498d-9bc2-97c8248fe4bc","Type":"ContainerDied","Data":"ceed4bd7cda1f544ac2f5a8e3be9e2511e166bc08c31d936f1955c3ed45f5383"} Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.239816 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-dns-svc\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.239873 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-ovsdbserver-sb\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.239935 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-openstack-edpm-ipam\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.239966 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-config\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.240112 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-ovsdbserver-nb\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.240155 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c56dl\" (UniqueName: \"kubernetes.io/projected/370135b1-1365-490b-a9ae-d8ffb1361718-kube-api-access-c56dl\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.240837 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-ovsdbserver-sb\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.241346 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-config\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.241367 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-openstack-edpm-ipam\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.241372 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-ovsdbserver-nb\") pod 
\"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.241656 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-dns-svc\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.258634 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c56dl\" (UniqueName: \"kubernetes.io/projected/370135b1-1365-490b-a9ae-d8ffb1361718-kube-api-access-c56dl\") pod \"dnsmasq-dns-fb68d687f-b2v4r\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.446805 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.500726 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.544288 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-ovsdbserver-nb\") pod \"70b388ac-7547-498d-9bc2-97c8248fe4bc\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.544339 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxhgs\" (UniqueName: \"kubernetes.io/projected/70b388ac-7547-498d-9bc2-97c8248fe4bc-kube-api-access-bxhgs\") pod \"70b388ac-7547-498d-9bc2-97c8248fe4bc\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.544421 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-config\") pod \"70b388ac-7547-498d-9bc2-97c8248fe4bc\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.544579 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-dns-svc\") pod \"70b388ac-7547-498d-9bc2-97c8248fe4bc\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.544635 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-ovsdbserver-sb\") pod \"70b388ac-7547-498d-9bc2-97c8248fe4bc\" (UID: \"70b388ac-7547-498d-9bc2-97c8248fe4bc\") " Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.550913 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70b388ac-7547-498d-9bc2-97c8248fe4bc-kube-api-access-bxhgs" (OuterVolumeSpecName: "kube-api-access-bxhgs") pod "70b388ac-7547-498d-9bc2-97c8248fe4bc" (UID: "70b388ac-7547-498d-9bc2-97c8248fe4bc"). InnerVolumeSpecName "kube-api-access-bxhgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.606268 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70b388ac-7547-498d-9bc2-97c8248fe4bc" (UID: "70b388ac-7547-498d-9bc2-97c8248fe4bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.611929 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-config" (OuterVolumeSpecName: "config") pod "70b388ac-7547-498d-9bc2-97c8248fe4bc" (UID: "70b388ac-7547-498d-9bc2-97c8248fe4bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.625487 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70b388ac-7547-498d-9bc2-97c8248fe4bc" (UID: "70b388ac-7547-498d-9bc2-97c8248fe4bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.632223 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70b388ac-7547-498d-9bc2-97c8248fe4bc" (UID: "70b388ac-7547-498d-9bc2-97c8248fe4bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.648459 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.648489 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.648500 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxhgs\" (UniqueName: \"kubernetes.io/projected/70b388ac-7547-498d-9bc2-97c8248fe4bc-kube-api-access-bxhgs\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.648510 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.648518 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70b388ac-7547-498d-9bc2-97c8248fe4bc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:28:56 crc kubenswrapper[4661]: I0120 18:28:56.901315 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb68d687f-b2v4r"] Jan 20 18:28:56 crc kubenswrapper[4661]: W0120 18:28:56.903587 4661 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod370135b1_1365_490b_a9ae_d8ffb1361718.slice/crio-d2ba502a0fe2c4a68dc1d43015bd5ec21c3e7ab11b0b0efc83d5d882814d6b6f WatchSource:0}: Error finding container d2ba502a0fe2c4a68dc1d43015bd5ec21c3e7ab11b0b0efc83d5d882814d6b6f: Status 404 returned error can't find the container with id d2ba502a0fe2c4a68dc1d43015bd5ec21c3e7ab11b0b0efc83d5d882814d6b6f Jan 20 18:28:57 crc kubenswrapper[4661]: I0120 18:28:57.218899 4661 generic.go:334] "Generic (PLEG): container finished" podID="370135b1-1365-490b-a9ae-d8ffb1361718" containerID="9382aa020a5557717fe06067ed0346f0ddaddff9324539aba24af414acd4c186" exitCode=0 Jan 20 18:28:57 crc kubenswrapper[4661]: I0120 18:28:57.219317 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" event={"ID":"370135b1-1365-490b-a9ae-d8ffb1361718","Type":"ContainerDied","Data":"9382aa020a5557717fe06067ed0346f0ddaddff9324539aba24af414acd4c186"} Jan 20 18:28:57 crc kubenswrapper[4661]: I0120 18:28:57.219346 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" event={"ID":"370135b1-1365-490b-a9ae-d8ffb1361718","Type":"ContainerStarted","Data":"d2ba502a0fe2c4a68dc1d43015bd5ec21c3e7ab11b0b0efc83d5d882814d6b6f"} Jan 20 18:28:57 crc kubenswrapper[4661]: I0120 18:28:57.241215 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" event={"ID":"70b388ac-7547-498d-9bc2-97c8248fe4bc","Type":"ContainerDied","Data":"60874919977fee9fe744f98ca369154d99e74ef0a063bf52a2a83bf51fc1c8d4"} Jan 20 18:28:57 crc kubenswrapper[4661]: I0120 18:28:57.241274 4661 scope.go:117] "RemoveContainer" containerID="ceed4bd7cda1f544ac2f5a8e3be9e2511e166bc08c31d936f1955c3ed45f5383" Jan 20 18:28:57 crc kubenswrapper[4661]: I0120 18:28:57.241421 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-x6dtq" Jan 20 18:28:57 crc kubenswrapper[4661]: I0120 18:28:57.406153 4661 scope.go:117] "RemoveContainer" containerID="9a5f818cdc31cc9bb71f9c92850996f8ba24b018d4bbab00f7c696ee7056dc26" Jan 20 18:28:57 crc kubenswrapper[4661]: E0120 18:28:57.414955 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod370135b1_1365_490b_a9ae_d8ffb1361718.slice/crio-9382aa020a5557717fe06067ed0346f0ddaddff9324539aba24af414acd4c186.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod370135b1_1365_490b_a9ae_d8ffb1361718.slice/crio-conmon-9382aa020a5557717fe06067ed0346f0ddaddff9324539aba24af414acd4c186.scope\": RecentStats: unable to find data in memory cache]" Jan 20 18:28:57 crc kubenswrapper[4661]: I0120 18:28:57.480192 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-x6dtq"] Jan 20 18:28:57 crc kubenswrapper[4661]: I0120 18:28:57.488627 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-x6dtq"] Jan 20 18:28:58 crc kubenswrapper[4661]: I0120 18:28:58.153814 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70b388ac-7547-498d-9bc2-97c8248fe4bc" path="/var/lib/kubelet/pods/70b388ac-7547-498d-9bc2-97c8248fe4bc/volumes" Jan 20 18:28:58 crc kubenswrapper[4661]: I0120 18:28:58.253925 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" event={"ID":"370135b1-1365-490b-a9ae-d8ffb1361718","Type":"ContainerStarted","Data":"5e98075e4224430510b945da043c9e1de92368693b303b6ff3394f290cf2b9d6"} Jan 20 18:28:58 crc kubenswrapper[4661]: I0120 18:28:58.254115 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:28:58 crc kubenswrapper[4661]: I0120 18:28:58.289938 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" podStartSLOduration=2.289908885 podStartE2EDuration="2.289908885s" podCreationTimestamp="2026-01-20 18:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:28:58.28455643 +0000 UTC m=+1394.615346102" watchObservedRunningTime="2026-01-20 18:28:58.289908885 +0000 UTC m=+1394.620698557" Jan 20 18:28:59 crc kubenswrapper[4661]: I0120 18:28:59.324051 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:28:59 crc kubenswrapper[4661]: I0120 18:28:59.324113 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:28:59 crc kubenswrapper[4661]: I0120 18:28:59.324156 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:28:59 crc kubenswrapper[4661]: I0120 
18:28:59.324766 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f275e114f07b1c3b029b2e359e7da6e2181e513e10ba8fa3419553c67d8e09a7"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 18:28:59 crc kubenswrapper[4661]: I0120 18:28:59.324824 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://f275e114f07b1c3b029b2e359e7da6e2181e513e10ba8fa3419553c67d8e09a7" gracePeriod=600 Jan 20 18:29:00 crc kubenswrapper[4661]: I0120 18:29:00.283454 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="f275e114f07b1c3b029b2e359e7da6e2181e513e10ba8fa3419553c67d8e09a7" exitCode=0 Jan 20 18:29:00 crc kubenswrapper[4661]: I0120 18:29:00.283517 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"f275e114f07b1c3b029b2e359e7da6e2181e513e10ba8fa3419553c67d8e09a7"} Jan 20 18:29:00 crc kubenswrapper[4661]: I0120 18:29:00.284100 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e"} Jan 20 18:29:00 crc kubenswrapper[4661]: I0120 18:29:00.284124 4661 scope.go:117] "RemoveContainer" containerID="6a7b06eb16aab1344c1779c2757f290ec217a65e34e3c4694e2964d4e3f3d079" Jan 20 18:29:06 crc kubenswrapper[4661]: I0120 18:29:06.448979 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 18:29:06 crc kubenswrapper[4661]: I0120 18:29:06.536998 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-hh7hz"] Jan 20 18:29:06 crc kubenswrapper[4661]: I0120 18:29:06.537498 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" podUID="3e1553a5-bc2a-47cc-95ce-57f35366273b" containerName="dnsmasq-dns" containerID="cri-o://d48f04495b9c38c6802453d82eaba78701cd4ccbe43dce737dedb0b6002b0e49" gracePeriod=10 Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.007031 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.181196 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-ovsdbserver-sb\") pod \"3e1553a5-bc2a-47cc-95ce-57f35366273b\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.181331 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nmfp\" (UniqueName: \"kubernetes.io/projected/3e1553a5-bc2a-47cc-95ce-57f35366273b-kube-api-access-9nmfp\") pod \"3e1553a5-bc2a-47cc-95ce-57f35366273b\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.181440 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-dns-svc\") pod \"3e1553a5-bc2a-47cc-95ce-57f35366273b\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.181509 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-ovsdbserver-nb\") pod \"3e1553a5-bc2a-47cc-95ce-57f35366273b\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.181594 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-config\") pod \"3e1553a5-bc2a-47cc-95ce-57f35366273b\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.181643 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-openstack-edpm-ipam\") pod \"3e1553a5-bc2a-47cc-95ce-57f35366273b\" (UID: \"3e1553a5-bc2a-47cc-95ce-57f35366273b\") " Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.188499 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1553a5-bc2a-47cc-95ce-57f35366273b-kube-api-access-9nmfp" (OuterVolumeSpecName: "kube-api-access-9nmfp") pod "3e1553a5-bc2a-47cc-95ce-57f35366273b" (UID: "3e1553a5-bc2a-47cc-95ce-57f35366273b"). InnerVolumeSpecName "kube-api-access-9nmfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.238297 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e1553a5-bc2a-47cc-95ce-57f35366273b" (UID: "3e1553a5-bc2a-47cc-95ce-57f35366273b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.239490 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "3e1553a5-bc2a-47cc-95ce-57f35366273b" (UID: "3e1553a5-bc2a-47cc-95ce-57f35366273b"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.240968 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e1553a5-bc2a-47cc-95ce-57f35366273b" (UID: "3e1553a5-bc2a-47cc-95ce-57f35366273b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.250856 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-config" (OuterVolumeSpecName: "config") pod "3e1553a5-bc2a-47cc-95ce-57f35366273b" (UID: "3e1553a5-bc2a-47cc-95ce-57f35366273b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.252406 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e1553a5-bc2a-47cc-95ce-57f35366273b" (UID: "3e1553a5-bc2a-47cc-95ce-57f35366273b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.284723 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.284758 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.284768 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-config\") on node \"crc\" DevicePath \"\"" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.284776 4661 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.284786 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e1553a5-bc2a-47cc-95ce-57f35366273b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.284794 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nmfp\" (UniqueName: \"kubernetes.io/projected/3e1553a5-bc2a-47cc-95ce-57f35366273b-kube-api-access-9nmfp\") on node \"crc\" DevicePath \"\"" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.376254 4661 generic.go:334] "Generic (PLEG): container finished" podID="3e1553a5-bc2a-47cc-95ce-57f35366273b" containerID="d48f04495b9c38c6802453d82eaba78701cd4ccbe43dce737dedb0b6002b0e49" exitCode=0 Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.376302 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" event={"ID":"3e1553a5-bc2a-47cc-95ce-57f35366273b","Type":"ContainerDied","Data":"d48f04495b9c38c6802453d82eaba78701cd4ccbe43dce737dedb0b6002b0e49"} Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 
18:29:07.376333 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" event={"ID":"3e1553a5-bc2a-47cc-95ce-57f35366273b","Type":"ContainerDied","Data":"d1a7697058c9c55cf5cb47594271d6bfbf21ac20ffa8e97a92fc7bf416fc377d"} Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.376353 4661 scope.go:117] "RemoveContainer" containerID="d48f04495b9c38c6802453d82eaba78701cd4ccbe43dce737dedb0b6002b0e49" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.376312 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-hh7hz" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.400950 4661 scope.go:117] "RemoveContainer" containerID="42425f22809477de2c603dcaf1c9d425fb978094e92e53097fec690a91128e3b" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.417881 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-hh7hz"] Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.423530 4661 scope.go:117] "RemoveContainer" containerID="d48f04495b9c38c6802453d82eaba78701cd4ccbe43dce737dedb0b6002b0e49" Jan 20 18:29:07 crc kubenswrapper[4661]: E0120 18:29:07.424148 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d48f04495b9c38c6802453d82eaba78701cd4ccbe43dce737dedb0b6002b0e49\": container with ID starting with d48f04495b9c38c6802453d82eaba78701cd4ccbe43dce737dedb0b6002b0e49 not found: ID does not exist" containerID="d48f04495b9c38c6802453d82eaba78701cd4ccbe43dce737dedb0b6002b0e49" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.424193 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48f04495b9c38c6802453d82eaba78701cd4ccbe43dce737dedb0b6002b0e49"} err="failed to get container status \"d48f04495b9c38c6802453d82eaba78701cd4ccbe43dce737dedb0b6002b0e49\": rpc error: code = NotFound desc = could not find container \"d48f04495b9c38c6802453d82eaba78701cd4ccbe43dce737dedb0b6002b0e49\": container with ID starting with d48f04495b9c38c6802453d82eaba78701cd4ccbe43dce737dedb0b6002b0e49 not found: ID does not exist" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.424218 4661 scope.go:117] "RemoveContainer" containerID="42425f22809477de2c603dcaf1c9d425fb978094e92e53097fec690a91128e3b" Jan 20 18:29:07 crc kubenswrapper[4661]: E0120 18:29:07.424691 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42425f22809477de2c603dcaf1c9d425fb978094e92e53097fec690a91128e3b\": container with ID starting with 42425f22809477de2c603dcaf1c9d425fb978094e92e53097fec690a91128e3b not found: ID does not exist" containerID="42425f22809477de2c603dcaf1c9d425fb978094e92e53097fec690a91128e3b" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.424735 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42425f22809477de2c603dcaf1c9d425fb978094e92e53097fec690a91128e3b"} err="failed to get container status \"42425f22809477de2c603dcaf1c9d425fb978094e92e53097fec690a91128e3b\": rpc error: code = NotFound desc = could not find container \"42425f22809477de2c603dcaf1c9d425fb978094e92e53097fec690a91128e3b\": container with ID starting with 42425f22809477de2c603dcaf1c9d425fb978094e92e53097fec690a91128e3b not found: ID does not exist" Jan 20 18:29:07 crc kubenswrapper[4661]: I0120 18:29:07.434856 4661 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-hh7hz"] Jan 20 18:29:08 crc kubenswrapper[4661]: I0120 18:29:08.157518 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e1553a5-bc2a-47cc-95ce-57f35366273b" path="/var/lib/kubelet/pods/3e1553a5-bc2a-47cc-95ce-57f35366273b/volumes" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.641330 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9"] Jan 20 18:29:16 crc kubenswrapper[4661]: E0120 18:29:16.643339 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1553a5-bc2a-47cc-95ce-57f35366273b" containerName="init" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.643445 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1553a5-bc2a-47cc-95ce-57f35366273b" containerName="init" Jan 20 18:29:16 crc kubenswrapper[4661]: E0120 18:29:16.643546 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b388ac-7547-498d-9bc2-97c8248fe4bc" containerName="init" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.643620 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b388ac-7547-498d-9bc2-97c8248fe4bc" containerName="init" Jan 20 18:29:16 crc kubenswrapper[4661]: E0120 18:29:16.643730 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b388ac-7547-498d-9bc2-97c8248fe4bc" containerName="dnsmasq-dns" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.643806 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b388ac-7547-498d-9bc2-97c8248fe4bc" containerName="dnsmasq-dns" Jan 20 18:29:16 crc kubenswrapper[4661]: E0120 18:29:16.643902 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1553a5-bc2a-47cc-95ce-57f35366273b" containerName="dnsmasq-dns" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.643975 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1553a5-bc2a-47cc-95ce-57f35366273b" containerName="dnsmasq-dns" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.644268 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1553a5-bc2a-47cc-95ce-57f35366273b" containerName="dnsmasq-dns" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.644380 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b388ac-7547-498d-9bc2-97c8248fe4bc" containerName="dnsmasq-dns" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.645165 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.650700 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.650730 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.651332 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.655606 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.667662 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9"] Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.811812 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjtbs\" (UniqueName: \"kubernetes.io/projected/404d95a4-80b6-44e9-92ff-3d9f880ade4b-kube-api-access-wjtbs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.812195 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.812487 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.812798 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.914919 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.914979 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjtbs\" (UniqueName: 
\"kubernetes.io/projected/404d95a4-80b6-44e9-92ff-3d9f880ade4b-kube-api-access-wjtbs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.915022 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.915113 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.925411 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.928193 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.928614 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.932578 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjtbs\" (UniqueName: \"kubernetes.io/projected/404d95a4-80b6-44e9-92ff-3d9f880ade4b-kube-api-access-wjtbs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:16 crc kubenswrapper[4661]: I0120 18:29:16.962066 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:17 crc kubenswrapper[4661]: I0120 18:29:17.500772 4661 generic.go:334] "Generic (PLEG): container finished" podID="7301e169-326c-4397-89f7-28b94553cef4" containerID="7f6d40b819df9722a5445b60ea9efba0f3bab7b98baccf8c7b259a5e400f21fb" exitCode=0 Jan 20 18:29:17 crc kubenswrapper[4661]: I0120 18:29:17.501174 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7301e169-326c-4397-89f7-28b94553cef4","Type":"ContainerDied","Data":"7f6d40b819df9722a5445b60ea9efba0f3bab7b98baccf8c7b259a5e400f21fb"} Jan 20 18:29:17 crc kubenswrapper[4661]: I0120 18:29:17.508249 4661 generic.go:334] "Generic (PLEG): container finished" podID="5a690866-3b40-4a9f-ba41-a5a3a6d76c95" containerID="4d43df4ee5a1eef1e28c16c7e75b021a84c46bfc1810c8498284d8ad97f27950" exitCode=0 Jan 20 18:29:17 crc kubenswrapper[4661]: I0120 18:29:17.508299 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a690866-3b40-4a9f-ba41-a5a3a6d76c95","Type":"ContainerDied","Data":"4d43df4ee5a1eef1e28c16c7e75b021a84c46bfc1810c8498284d8ad97f27950"} Jan 20 18:29:17 crc kubenswrapper[4661]: I0120 18:29:17.637511 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9"] Jan 20 18:29:17 crc kubenswrapper[4661]: W0120 18:29:17.656725 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404d95a4_80b6_44e9_92ff_3d9f880ade4b.slice/crio-0554384e5926824981b75653015e9b76ba11308ad5910963618c372c3e7bf80a WatchSource:0}: Error finding container 0554384e5926824981b75653015e9b76ba11308ad5910963618c372c3e7bf80a: Status 404 returned error can't find the container with id 0554384e5926824981b75653015e9b76ba11308ad5910963618c372c3e7bf80a Jan 20 18:29:18 crc kubenswrapper[4661]: I0120 18:29:18.517984 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" event={"ID":"404d95a4-80b6-44e9-92ff-3d9f880ade4b","Type":"ContainerStarted","Data":"0554384e5926824981b75653015e9b76ba11308ad5910963618c372c3e7bf80a"} Jan 20 18:29:18 crc kubenswrapper[4661]: I0120 18:29:18.520255 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5a690866-3b40-4a9f-ba41-a5a3a6d76c95","Type":"ContainerStarted","Data":"b13f122b99f90ea0a5ee713ce32690580809d1f82d7485ac7ae2820bf23899ad"} Jan 20 18:29:18 crc kubenswrapper[4661]: I0120 18:29:18.520488 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 20 18:29:18 crc kubenswrapper[4661]: I0120 18:29:18.527531 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7301e169-326c-4397-89f7-28b94553cef4","Type":"ContainerStarted","Data":"4a8edd80570df7e2cecdb7e5dca8d4eaffe23a58169effbd9f7b1e40fc637196"} Jan 20 18:29:18 crc kubenswrapper[4661]: I0120 18:29:18.527770 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:29:18 crc kubenswrapper[4661]: I0120 18:29:18.555969 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.555948665 podStartE2EDuration="36.555948665s" podCreationTimestamp="2026-01-20 18:28:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:29:18.544926858 +0000 UTC m=+1414.875716520" watchObservedRunningTime="2026-01-20 18:29:18.555948665 +0000 UTC m=+1414.886738327" Jan 20 18:29:18 crc kubenswrapper[4661]: I0120 18:29:18.573249 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.573234765 podStartE2EDuration="36.573234765s" podCreationTimestamp="2026-01-20 18:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 18:29:18.571268284 +0000 UTC m=+1414.902057956" watchObservedRunningTime="2026-01-20 18:29:18.573234765 +0000 UTC m=+1414.904024427" Jan 20 18:29:27 crc kubenswrapper[4661]: I0120 18:29:27.449086 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qpqjm"] Jan 20 18:29:27 crc kubenswrapper[4661]: I0120 18:29:27.456312 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:27 crc kubenswrapper[4661]: I0120 18:29:27.478402 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpqjm"] Jan 20 18:29:27 crc kubenswrapper[4661]: I0120 18:29:27.547707 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-utilities\") pod \"redhat-operators-qpqjm\" (UID: \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\") " pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:27 crc kubenswrapper[4661]: I0120 18:29:27.547835 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f727r\" (UniqueName: \"kubernetes.io/projected/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-kube-api-access-f727r\") pod \"redhat-operators-qpqjm\" (UID: \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\") " pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:27 crc kubenswrapper[4661]: I0120 18:29:27.547864 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-catalog-content\") pod \"redhat-operators-qpqjm\" (UID: \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\") " pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:27 crc kubenswrapper[4661]: I0120 18:29:27.651288 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f727r\" (UniqueName: \"kubernetes.io/projected/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-kube-api-access-f727r\") pod \"redhat-operators-qpqjm\" (UID: \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\") " pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:27 crc kubenswrapper[4661]: I0120 18:29:27.651351 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-catalog-content\") pod \"redhat-operators-qpqjm\" (UID: \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\") " pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:27 crc kubenswrapper[4661]: I0120 18:29:27.651412 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-utilities\") pod \"redhat-operators-qpqjm\" (UID: \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\") " pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:27 crc kubenswrapper[4661]: I0120 18:29:27.651909 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-utilities\") pod \"redhat-operators-qpqjm\" (UID: \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\") " pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:27 crc kubenswrapper[4661]: I0120 18:29:27.652172 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-catalog-content\") pod \"redhat-operators-qpqjm\" (UID: \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\") " pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:27 crc kubenswrapper[4661]: I0120 18:29:27.681750 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f727r\" (UniqueName: \"kubernetes.io/projected/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-kube-api-access-f727r\") pod \"redhat-operators-qpqjm\" (UID: \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\") " pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:27 crc kubenswrapper[4661]: I0120 18:29:27.812510 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:29 crc kubenswrapper[4661]: I0120 18:29:29.266722 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpqjm"] Jan 20 18:29:29 crc kubenswrapper[4661]: W0120 18:29:29.272171 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6a5dcc2_b0bd_4a0f_9f10_2329db52c0a9.slice/crio-fcfc6ef5a468c5f93db037887c0d0e9bd314bad94d81a58ddc2321283f6125f1 WatchSource:0}: Error finding container fcfc6ef5a468c5f93db037887c0d0e9bd314bad94d81a58ddc2321283f6125f1: Status 404 returned error can't find the container with id fcfc6ef5a468c5f93db037887c0d0e9bd314bad94d81a58ddc2321283f6125f1 Jan 20 18:29:29 crc kubenswrapper[4661]: I0120 18:29:29.650262 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" event={"ID":"404d95a4-80b6-44e9-92ff-3d9f880ade4b","Type":"ContainerStarted","Data":"ac5b500f32ad20fd3818d44f140e4b9f1a7d4df30f7c916ddcd4d8434d8ab239"} Jan 20 18:29:29 crc kubenswrapper[4661]: I0120 18:29:29.652732 4661 generic.go:334] "Generic (PLEG): container finished" podID="b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" containerID="f034104a02e7bd98ad3d404d6a66d0c6265a6bc1ac3c9a6207fbee0b615eb432" exitCode=0 Jan 20 18:29:29 crc kubenswrapper[4661]: I0120 18:29:29.652771 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpqjm" event={"ID":"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9","Type":"ContainerDied","Data":"f034104a02e7bd98ad3d404d6a66d0c6265a6bc1ac3c9a6207fbee0b615eb432"} Jan 20 18:29:29 crc kubenswrapper[4661]: I0120 18:29:29.652794 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpqjm" event={"ID":"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9","Type":"ContainerStarted","Data":"fcfc6ef5a468c5f93db037887c0d0e9bd314bad94d81a58ddc2321283f6125f1"} Jan 20 18:29:29 crc kubenswrapper[4661]: I0120 
18:29:29.714406 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" podStartSLOduration=2.613972837 podStartE2EDuration="13.714367063s" podCreationTimestamp="2026-01-20 18:29:16 +0000 UTC" firstStartedPulling="2026-01-20 18:29:17.663213418 +0000 UTC m=+1413.994003080" lastFinishedPulling="2026-01-20 18:29:28.763607644 +0000 UTC m=+1425.094397306" observedRunningTime="2026-01-20 18:29:29.679585166 +0000 UTC m=+1426.010374838" watchObservedRunningTime="2026-01-20 18:29:29.714367063 +0000 UTC m=+1426.045156725" Jan 20 18:29:31 crc kubenswrapper[4661]: I0120 18:29:31.669526 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpqjm" event={"ID":"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9","Type":"ContainerStarted","Data":"f6397ce2bb663ec1c1219dac5d58bb5f556482f409c488bd486e9a6d2faf0a05"} Jan 20 18:29:32 crc kubenswrapper[4661]: I0120 18:29:32.554944 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 20 18:29:32 crc kubenswrapper[4661]: I0120 18:29:32.570055 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 20 18:29:34 crc kubenswrapper[4661]: I0120 18:29:34.694613 4661 generic.go:334] "Generic (PLEG): container finished" podID="b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" containerID="f6397ce2bb663ec1c1219dac5d58bb5f556482f409c488bd486e9a6d2faf0a05" exitCode=0 Jan 20 18:29:34 crc kubenswrapper[4661]: I0120 18:29:34.694724 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpqjm" event={"ID":"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9","Type":"ContainerDied","Data":"f6397ce2bb663ec1c1219dac5d58bb5f556482f409c488bd486e9a6d2faf0a05"} Jan 20 18:29:35 crc kubenswrapper[4661]: I0120 18:29:35.708870 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpqjm" event={"ID":"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9","Type":"ContainerStarted","Data":"6fc439b8c8852f2abe3f1319e9c86d61424f97ff8f5c8403fd4bdc5e60aeb36f"} Jan 20 18:29:35 crc kubenswrapper[4661]: I0120 18:29:35.739424 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qpqjm" podStartSLOduration=3.139295516 podStartE2EDuration="8.739404144s" podCreationTimestamp="2026-01-20 18:29:27 +0000 UTC" firstStartedPulling="2026-01-20 18:29:29.654629826 +0000 UTC m=+1425.985419488" lastFinishedPulling="2026-01-20 18:29:35.254738434 +0000 UTC m=+1431.585528116" observedRunningTime="2026-01-20 18:29:35.73079237 +0000 UTC m=+1432.061582042" watchObservedRunningTime="2026-01-20 18:29:35.739404144 +0000 UTC m=+1432.070193806" Jan 20 18:29:37 crc kubenswrapper[4661]: I0120 18:29:37.813378 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:37 crc kubenswrapper[4661]: I0120 18:29:37.813800 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:38 crc kubenswrapper[4661]: I0120 18:29:38.867284 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpqjm" podUID="b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" containerName="registry-server" probeResult="failure" output=< Jan 20 18:29:38 crc kubenswrapper[4661]: timeout: failed to connect service ":50051" within 1s Jan 20 
18:29:38 crc kubenswrapper[4661]: > Jan 20 18:29:40 crc kubenswrapper[4661]: I0120 18:29:40.751568 4661 generic.go:334] "Generic (PLEG): container finished" podID="404d95a4-80b6-44e9-92ff-3d9f880ade4b" containerID="ac5b500f32ad20fd3818d44f140e4b9f1a7d4df30f7c916ddcd4d8434d8ab239" exitCode=0 Jan 20 18:29:40 crc kubenswrapper[4661]: I0120 18:29:40.751641 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" event={"ID":"404d95a4-80b6-44e9-92ff-3d9f880ade4b","Type":"ContainerDied","Data":"ac5b500f32ad20fd3818d44f140e4b9f1a7d4df30f7c916ddcd4d8434d8ab239"} Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.191211 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.314577 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-inventory\") pod \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.314618 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-repo-setup-combined-ca-bundle\") pod \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.314655 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-ssh-key-openstack-edpm-ipam\") pod \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.314821 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjtbs\" (UniqueName: \"kubernetes.io/projected/404d95a4-80b6-44e9-92ff-3d9f880ade4b-kube-api-access-wjtbs\") pod \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\" (UID: \"404d95a4-80b6-44e9-92ff-3d9f880ade4b\") " Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.323374 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "404d95a4-80b6-44e9-92ff-3d9f880ade4b" (UID: "404d95a4-80b6-44e9-92ff-3d9f880ade4b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.331892 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/404d95a4-80b6-44e9-92ff-3d9f880ade4b-kube-api-access-wjtbs" (OuterVolumeSpecName: "kube-api-access-wjtbs") pod "404d95a4-80b6-44e9-92ff-3d9f880ade4b" (UID: "404d95a4-80b6-44e9-92ff-3d9f880ade4b"). InnerVolumeSpecName "kube-api-access-wjtbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.342859 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "404d95a4-80b6-44e9-92ff-3d9f880ade4b" (UID: "404d95a4-80b6-44e9-92ff-3d9f880ade4b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.355130 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-inventory" (OuterVolumeSpecName: "inventory") pod "404d95a4-80b6-44e9-92ff-3d9f880ade4b" (UID: "404d95a4-80b6-44e9-92ff-3d9f880ade4b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.416995 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjtbs\" (UniqueName: \"kubernetes.io/projected/404d95a4-80b6-44e9-92ff-3d9f880ade4b-kube-api-access-wjtbs\") on node \"crc\" DevicePath \"\"" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.417033 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.417043 4661 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.417052 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404d95a4-80b6-44e9-92ff-3d9f880ade4b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.774552 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" event={"ID":"404d95a4-80b6-44e9-92ff-3d9f880ade4b","Type":"ContainerDied","Data":"0554384e5926824981b75653015e9b76ba11308ad5910963618c372c3e7bf80a"} Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.774606 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0554384e5926824981b75653015e9b76ba11308ad5910963618c372c3e7bf80a" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.774708 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.858218 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546"] Jan 20 18:29:42 crc kubenswrapper[4661]: E0120 18:29:42.859394 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="404d95a4-80b6-44e9-92ff-3d9f880ade4b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.859424 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="404d95a4-80b6-44e9-92ff-3d9f880ade4b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.859579 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="404d95a4-80b6-44e9-92ff-3d9f880ade4b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.860153 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.863863 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.863867 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.864040 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.864086 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.884471 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546"] Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.929143 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rp546\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.929191 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rp546\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.929363 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdrnk\" (UniqueName: \"kubernetes.io/projected/f5353890-587c-4ab1-b6a5-8e21f7c573b5-kube-api-access-hdrnk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rp546\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:42 crc kubenswrapper[4661]: I0120 18:29:42.929487 4661 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rp546\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:43 crc kubenswrapper[4661]: I0120 18:29:43.031459 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rp546\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:43 crc kubenswrapper[4661]: I0120 18:29:43.031956 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rp546\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:43 crc kubenswrapper[4661]: I0120 18:29:43.032244 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdrnk\" (UniqueName: \"kubernetes.io/projected/f5353890-587c-4ab1-b6a5-8e21f7c573b5-kube-api-access-hdrnk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rp546\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:43 crc kubenswrapper[4661]: I0120 18:29:43.032469 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rp546\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:43 crc kubenswrapper[4661]: I0120 18:29:43.037222 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rp546\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:43 crc kubenswrapper[4661]: I0120 18:29:43.039248 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rp546\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:43 crc kubenswrapper[4661]: I0120 18:29:43.050662 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rp546\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:43 crc 
kubenswrapper[4661]: I0120 18:29:43.052500 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdrnk\" (UniqueName: \"kubernetes.io/projected/f5353890-587c-4ab1-b6a5-8e21f7c573b5-kube-api-access-hdrnk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rp546\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:43 crc kubenswrapper[4661]: I0120 18:29:43.176065 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:29:43 crc kubenswrapper[4661]: W0120 18:29:43.572293 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5353890_587c_4ab1_b6a5_8e21f7c573b5.slice/crio-5afe416b4840876d23ffeb5bc70a10de5e51643a1452291c25bdcf722a3571da WatchSource:0}: Error finding container 5afe416b4840876d23ffeb5bc70a10de5e51643a1452291c25bdcf722a3571da: Status 404 returned error can't find the container with id 5afe416b4840876d23ffeb5bc70a10de5e51643a1452291c25bdcf722a3571da Jan 20 18:29:43 crc kubenswrapper[4661]: I0120 18:29:43.577366 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546"] Jan 20 18:29:43 crc kubenswrapper[4661]: I0120 18:29:43.785454 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" event={"ID":"f5353890-587c-4ab1-b6a5-8e21f7c573b5","Type":"ContainerStarted","Data":"5afe416b4840876d23ffeb5bc70a10de5e51643a1452291c25bdcf722a3571da"} Jan 20 18:29:45 crc kubenswrapper[4661]: I0120 18:29:45.076138 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:29:45 crc kubenswrapper[4661]: I0120 18:29:45.812067 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" event={"ID":"f5353890-587c-4ab1-b6a5-8e21f7c573b5","Type":"ContainerStarted","Data":"6fb88c73f55b9609f9d50f268b3726ae00d7cf53163008b6105adffe39fdd063"} Jan 20 18:29:45 crc kubenswrapper[4661]: I0120 18:29:45.839208 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" podStartSLOduration=2.340593428 podStartE2EDuration="3.839184038s" podCreationTimestamp="2026-01-20 18:29:42 +0000 UTC" firstStartedPulling="2026-01-20 18:29:43.575545778 +0000 UTC m=+1439.906335460" lastFinishedPulling="2026-01-20 18:29:45.074136408 +0000 UTC m=+1441.404926070" observedRunningTime="2026-01-20 18:29:45.837200417 +0000 UTC m=+1442.167990099" watchObservedRunningTime="2026-01-20 18:29:45.839184038 +0000 UTC m=+1442.169973700" Jan 20 18:29:47 crc kubenswrapper[4661]: I0120 18:29:47.890801 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:47 crc kubenswrapper[4661]: I0120 18:29:47.962789 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:48 crc kubenswrapper[4661]: I0120 18:29:48.135462 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpqjm"] Jan 20 18:29:49 crc kubenswrapper[4661]: I0120 18:29:49.853538 4661 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-qpqjm" podUID="b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" containerName="registry-server" containerID="cri-o://6fc439b8c8852f2abe3f1319e9c86d61424f97ff8f5c8403fd4bdc5e60aeb36f" gracePeriod=2 Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.307543 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.396310 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f727r\" (UniqueName: \"kubernetes.io/projected/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-kube-api-access-f727r\") pod \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\" (UID: \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\") " Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.396367 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-utilities\") pod \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\" (UID: \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\") " Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.396460 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-catalog-content\") pod \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\" (UID: \"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9\") " Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.398089 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-utilities" (OuterVolumeSpecName: "utilities") pod "b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" (UID: "b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.403519 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-kube-api-access-f727r" (OuterVolumeSpecName: "kube-api-access-f727r") pod "b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" (UID: "b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9"). InnerVolumeSpecName "kube-api-access-f727r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.498883 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f727r\" (UniqueName: \"kubernetes.io/projected/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-kube-api-access-f727r\") on node \"crc\" DevicePath \"\"" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.498924 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.513036 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" (UID: "b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.600390 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.870655 4661 generic.go:334] "Generic (PLEG): container finished" podID="b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" containerID="6fc439b8c8852f2abe3f1319e9c86d61424f97ff8f5c8403fd4bdc5e60aeb36f" exitCode=0 Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.870745 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpqjm" event={"ID":"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9","Type":"ContainerDied","Data":"6fc439b8c8852f2abe3f1319e9c86d61424f97ff8f5c8403fd4bdc5e60aeb36f"} Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.870852 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpqjm" event={"ID":"b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9","Type":"ContainerDied","Data":"fcfc6ef5a468c5f93db037887c0d0e9bd314bad94d81a58ddc2321283f6125f1"} Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.870765 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpqjm" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.870879 4661 scope.go:117] "RemoveContainer" containerID="6fc439b8c8852f2abe3f1319e9c86d61424f97ff8f5c8403fd4bdc5e60aeb36f" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.909205 4661 scope.go:117] "RemoveContainer" containerID="f6397ce2bb663ec1c1219dac5d58bb5f556482f409c488bd486e9a6d2faf0a05" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.921805 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpqjm"] Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.941576 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qpqjm"] Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.954133 4661 scope.go:117] "RemoveContainer" containerID="f034104a02e7bd98ad3d404d6a66d0c6265a6bc1ac3c9a6207fbee0b615eb432" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.977964 4661 scope.go:117] "RemoveContainer" containerID="6fc439b8c8852f2abe3f1319e9c86d61424f97ff8f5c8403fd4bdc5e60aeb36f" Jan 20 18:29:50 crc kubenswrapper[4661]: E0120 18:29:50.978726 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fc439b8c8852f2abe3f1319e9c86d61424f97ff8f5c8403fd4bdc5e60aeb36f\": container with ID starting with 6fc439b8c8852f2abe3f1319e9c86d61424f97ff8f5c8403fd4bdc5e60aeb36f not found: ID does not exist" containerID="6fc439b8c8852f2abe3f1319e9c86d61424f97ff8f5c8403fd4bdc5e60aeb36f" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.978758 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fc439b8c8852f2abe3f1319e9c86d61424f97ff8f5c8403fd4bdc5e60aeb36f"} err="failed to get container status \"6fc439b8c8852f2abe3f1319e9c86d61424f97ff8f5c8403fd4bdc5e60aeb36f\": rpc error: code = NotFound desc = could not find container \"6fc439b8c8852f2abe3f1319e9c86d61424f97ff8f5c8403fd4bdc5e60aeb36f\": container with ID starting with 6fc439b8c8852f2abe3f1319e9c86d61424f97ff8f5c8403fd4bdc5e60aeb36f not found: ID does not exist" Jan 20 18:29:50 crc 
kubenswrapper[4661]: I0120 18:29:50.978777 4661 scope.go:117] "RemoveContainer" containerID="f6397ce2bb663ec1c1219dac5d58bb5f556482f409c488bd486e9a6d2faf0a05" Jan 20 18:29:50 crc kubenswrapper[4661]: E0120 18:29:50.979043 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6397ce2bb663ec1c1219dac5d58bb5f556482f409c488bd486e9a6d2faf0a05\": container with ID starting with f6397ce2bb663ec1c1219dac5d58bb5f556482f409c488bd486e9a6d2faf0a05 not found: ID does not exist" containerID="f6397ce2bb663ec1c1219dac5d58bb5f556482f409c488bd486e9a6d2faf0a05" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.979066 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6397ce2bb663ec1c1219dac5d58bb5f556482f409c488bd486e9a6d2faf0a05"} err="failed to get container status \"f6397ce2bb663ec1c1219dac5d58bb5f556482f409c488bd486e9a6d2faf0a05\": rpc error: code = NotFound desc = could not find container \"f6397ce2bb663ec1c1219dac5d58bb5f556482f409c488bd486e9a6d2faf0a05\": container with ID starting with f6397ce2bb663ec1c1219dac5d58bb5f556482f409c488bd486e9a6d2faf0a05 not found: ID does not exist" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.979079 4661 scope.go:117] "RemoveContainer" containerID="f034104a02e7bd98ad3d404d6a66d0c6265a6bc1ac3c9a6207fbee0b615eb432" Jan 20 18:29:50 crc kubenswrapper[4661]: E0120 18:29:50.979289 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f034104a02e7bd98ad3d404d6a66d0c6265a6bc1ac3c9a6207fbee0b615eb432\": container with ID starting with f034104a02e7bd98ad3d404d6a66d0c6265a6bc1ac3c9a6207fbee0b615eb432 not found: ID does not exist" containerID="f034104a02e7bd98ad3d404d6a66d0c6265a6bc1ac3c9a6207fbee0b615eb432" Jan 20 18:29:50 crc kubenswrapper[4661]: I0120 18:29:50.979309 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f034104a02e7bd98ad3d404d6a66d0c6265a6bc1ac3c9a6207fbee0b615eb432"} err="failed to get container status \"f034104a02e7bd98ad3d404d6a66d0c6265a6bc1ac3c9a6207fbee0b615eb432\": rpc error: code = NotFound desc = could not find container \"f034104a02e7bd98ad3d404d6a66d0c6265a6bc1ac3c9a6207fbee0b615eb432\": container with ID starting with f034104a02e7bd98ad3d404d6a66d0c6265a6bc1ac3c9a6207fbee0b615eb432 not found: ID does not exist" Jan 20 18:29:52 crc kubenswrapper[4661]: I0120 18:29:52.158415 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" path="/var/lib/kubelet/pods/b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9/volumes" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.195187 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl"] Jan 20 18:30:00 crc kubenswrapper[4661]: E0120 18:30:00.196012 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" containerName="extract-content" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.196033 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" containerName="extract-content" Jan 20 18:30:00 crc kubenswrapper[4661]: E0120 18:30:00.196051 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" containerName="extract-utilities" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 
18:30:00.196061 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" containerName="extract-utilities" Jan 20 18:30:00 crc kubenswrapper[4661]: E0120 18:30:00.196101 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" containerName="registry-server" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.196109 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" containerName="registry-server" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.196316 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6a5dcc2-b0bd-4a0f-9f10-2329db52c0a9" containerName="registry-server" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.197104 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.201639 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.202071 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.226860 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl"] Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.394938 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjwhx\" (UniqueName: \"kubernetes.io/projected/fbb388f0-e71a-4a38-88d2-0569af45dad4-kube-api-access-fjwhx\") pod \"collect-profiles-29482230-pvvcl\" (UID: \"fbb388f0-e71a-4a38-88d2-0569af45dad4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.395771 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbb388f0-e71a-4a38-88d2-0569af45dad4-secret-volume\") pod \"collect-profiles-29482230-pvvcl\" (UID: \"fbb388f0-e71a-4a38-88d2-0569af45dad4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.395909 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbb388f0-e71a-4a38-88d2-0569af45dad4-config-volume\") pod \"collect-profiles-29482230-pvvcl\" (UID: \"fbb388f0-e71a-4a38-88d2-0569af45dad4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.497623 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbb388f0-e71a-4a38-88d2-0569af45dad4-secret-volume\") pod \"collect-profiles-29482230-pvvcl\" (UID: \"fbb388f0-e71a-4a38-88d2-0569af45dad4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.497720 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/fbb388f0-e71a-4a38-88d2-0569af45dad4-config-volume\") pod \"collect-profiles-29482230-pvvcl\" (UID: \"fbb388f0-e71a-4a38-88d2-0569af45dad4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.497794 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjwhx\" (UniqueName: \"kubernetes.io/projected/fbb388f0-e71a-4a38-88d2-0569af45dad4-kube-api-access-fjwhx\") pod \"collect-profiles-29482230-pvvcl\" (UID: \"fbb388f0-e71a-4a38-88d2-0569af45dad4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.498534 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbb388f0-e71a-4a38-88d2-0569af45dad4-config-volume\") pod \"collect-profiles-29482230-pvvcl\" (UID: \"fbb388f0-e71a-4a38-88d2-0569af45dad4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.512444 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbb388f0-e71a-4a38-88d2-0569af45dad4-secret-volume\") pod \"collect-profiles-29482230-pvvcl\" (UID: \"fbb388f0-e71a-4a38-88d2-0569af45dad4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.523970 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjwhx\" (UniqueName: \"kubernetes.io/projected/fbb388f0-e71a-4a38-88d2-0569af45dad4-kube-api-access-fjwhx\") pod \"collect-profiles-29482230-pvvcl\" (UID: \"fbb388f0-e71a-4a38-88d2-0569af45dad4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.528562 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" Jan 20 18:30:00 crc kubenswrapper[4661]: I0120 18:30:00.978913 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl"] Jan 20 18:30:01 crc kubenswrapper[4661]: I0120 18:30:01.997093 4661 generic.go:334] "Generic (PLEG): container finished" podID="fbb388f0-e71a-4a38-88d2-0569af45dad4" containerID="f8a4ae0b9950e8fd39c5057982e00647917bbb2b8176cd8e574d5847ce553728" exitCode=0 Jan 20 18:30:01 crc kubenswrapper[4661]: I0120 18:30:01.997201 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" event={"ID":"fbb388f0-e71a-4a38-88d2-0569af45dad4","Type":"ContainerDied","Data":"f8a4ae0b9950e8fd39c5057982e00647917bbb2b8176cd8e574d5847ce553728"} Jan 20 18:30:01 crc kubenswrapper[4661]: I0120 18:30:01.997496 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" event={"ID":"fbb388f0-e71a-4a38-88d2-0569af45dad4","Type":"ContainerStarted","Data":"975db8fdbc69098e3994ebcc14709a5aa9be0bc991268b38dceceab77f33448a"} Jan 20 18:30:03 crc kubenswrapper[4661]: I0120 18:30:03.344857 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" Jan 20 18:30:03 crc kubenswrapper[4661]: I0120 18:30:03.345665 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjwhx\" (UniqueName: \"kubernetes.io/projected/fbb388f0-e71a-4a38-88d2-0569af45dad4-kube-api-access-fjwhx\") pod \"fbb388f0-e71a-4a38-88d2-0569af45dad4\" (UID: \"fbb388f0-e71a-4a38-88d2-0569af45dad4\") " Jan 20 18:30:03 crc kubenswrapper[4661]: I0120 18:30:03.345804 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbb388f0-e71a-4a38-88d2-0569af45dad4-secret-volume\") pod \"fbb388f0-e71a-4a38-88d2-0569af45dad4\" (UID: \"fbb388f0-e71a-4a38-88d2-0569af45dad4\") " Jan 20 18:30:03 crc kubenswrapper[4661]: I0120 18:30:03.346003 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbb388f0-e71a-4a38-88d2-0569af45dad4-config-volume\") pod \"fbb388f0-e71a-4a38-88d2-0569af45dad4\" (UID: \"fbb388f0-e71a-4a38-88d2-0569af45dad4\") " Jan 20 18:30:03 crc kubenswrapper[4661]: I0120 18:30:03.346456 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb388f0-e71a-4a38-88d2-0569af45dad4-config-volume" (OuterVolumeSpecName: "config-volume") pod "fbb388f0-e71a-4a38-88d2-0569af45dad4" (UID: "fbb388f0-e71a-4a38-88d2-0569af45dad4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:30:03 crc kubenswrapper[4661]: I0120 18:30:03.350984 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb388f0-e71a-4a38-88d2-0569af45dad4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fbb388f0-e71a-4a38-88d2-0569af45dad4" (UID: "fbb388f0-e71a-4a38-88d2-0569af45dad4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:30:03 crc kubenswrapper[4661]: I0120 18:30:03.353083 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb388f0-e71a-4a38-88d2-0569af45dad4-kube-api-access-fjwhx" (OuterVolumeSpecName: "kube-api-access-fjwhx") pod "fbb388f0-e71a-4a38-88d2-0569af45dad4" (UID: "fbb388f0-e71a-4a38-88d2-0569af45dad4"). InnerVolumeSpecName "kube-api-access-fjwhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:30:03 crc kubenswrapper[4661]: I0120 18:30:03.447537 4661 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbb388f0-e71a-4a38-88d2-0569af45dad4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 18:30:03 crc kubenswrapper[4661]: I0120 18:30:03.447578 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjwhx\" (UniqueName: \"kubernetes.io/projected/fbb388f0-e71a-4a38-88d2-0569af45dad4-kube-api-access-fjwhx\") on node \"crc\" DevicePath \"\"" Jan 20 18:30:03 crc kubenswrapper[4661]: I0120 18:30:03.447592 4661 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbb388f0-e71a-4a38-88d2-0569af45dad4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 18:30:04 crc kubenswrapper[4661]: I0120 18:30:04.015268 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" event={"ID":"fbb388f0-e71a-4a38-88d2-0569af45dad4","Type":"ContainerDied","Data":"975db8fdbc69098e3994ebcc14709a5aa9be0bc991268b38dceceab77f33448a"} Jan 20 18:30:04 crc kubenswrapper[4661]: I0120 18:30:04.015542 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="975db8fdbc69098e3994ebcc14709a5aa9be0bc991268b38dceceab77f33448a" Jan 20 18:30:04 crc kubenswrapper[4661]: I0120 18:30:04.015404 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl" Jan 20 18:30:14 crc kubenswrapper[4661]: I0120 18:30:14.966750 4661 scope.go:117] "RemoveContainer" containerID="50a31ec5968f93b74cfbba609fda811b5e0b978fdeae0a5f9a884b824faa4103" Jan 20 18:30:15 crc kubenswrapper[4661]: I0120 18:30:15.005144 4661 scope.go:117] "RemoveContainer" containerID="94410b0d1d664184358059c9752a856bce000fdbd3932d5acd266822b3cc9626" Jan 20 18:30:15 crc kubenswrapper[4661]: I0120 18:30:15.049999 4661 scope.go:117] "RemoveContainer" containerID="6670cf7646a37fc2b25aa36c24b0ac42f4b0e90eb50263c69d96c7a1d1fa734f" Jan 20 18:30:59 crc kubenswrapper[4661]: I0120 18:30:59.323581 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:30:59 crc kubenswrapper[4661]: I0120 18:30:59.324117 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.420756 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sbcc6"] Jan 20 18:31:06 crc kubenswrapper[4661]: E0120 18:31:06.424093 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb388f0-e71a-4a38-88d2-0569af45dad4" containerName="collect-profiles" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.424206 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb388f0-e71a-4a38-88d2-0569af45dad4" containerName="collect-profiles" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 
18:31:06.424494 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb388f0-e71a-4a38-88d2-0569af45dad4" containerName="collect-profiles" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.426067 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.450301 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sbcc6"] Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.533885 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12591ff8-2a55-4302-ac27-0df813dfce6e-catalog-content\") pod \"certified-operators-sbcc6\" (UID: \"12591ff8-2a55-4302-ac27-0df813dfce6e\") " pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.533947 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12591ff8-2a55-4302-ac27-0df813dfce6e-utilities\") pod \"certified-operators-sbcc6\" (UID: \"12591ff8-2a55-4302-ac27-0df813dfce6e\") " pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.533999 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5dbk\" (UniqueName: \"kubernetes.io/projected/12591ff8-2a55-4302-ac27-0df813dfce6e-kube-api-access-j5dbk\") pod \"certified-operators-sbcc6\" (UID: \"12591ff8-2a55-4302-ac27-0df813dfce6e\") " pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.635874 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12591ff8-2a55-4302-ac27-0df813dfce6e-catalog-content\") pod \"certified-operators-sbcc6\" (UID: \"12591ff8-2a55-4302-ac27-0df813dfce6e\") " pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.635957 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12591ff8-2a55-4302-ac27-0df813dfce6e-utilities\") pod \"certified-operators-sbcc6\" (UID: \"12591ff8-2a55-4302-ac27-0df813dfce6e\") " pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.636011 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5dbk\" (UniqueName: \"kubernetes.io/projected/12591ff8-2a55-4302-ac27-0df813dfce6e-kube-api-access-j5dbk\") pod \"certified-operators-sbcc6\" (UID: \"12591ff8-2a55-4302-ac27-0df813dfce6e\") " pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.636511 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12591ff8-2a55-4302-ac27-0df813dfce6e-utilities\") pod \"certified-operators-sbcc6\" (UID: \"12591ff8-2a55-4302-ac27-0df813dfce6e\") " pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.636526 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/12591ff8-2a55-4302-ac27-0df813dfce6e-catalog-content\") pod \"certified-operators-sbcc6\" (UID: \"12591ff8-2a55-4302-ac27-0df813dfce6e\") " pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.657834 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5dbk\" (UniqueName: \"kubernetes.io/projected/12591ff8-2a55-4302-ac27-0df813dfce6e-kube-api-access-j5dbk\") pod \"certified-operators-sbcc6\" (UID: \"12591ff8-2a55-4302-ac27-0df813dfce6e\") " pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:06 crc kubenswrapper[4661]: I0120 18:31:06.749518 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:07 crc kubenswrapper[4661]: I0120 18:31:07.009844 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sbcc6"] Jan 20 18:31:07 crc kubenswrapper[4661]: I0120 18:31:07.679853 4661 generic.go:334] "Generic (PLEG): container finished" podID="12591ff8-2a55-4302-ac27-0df813dfce6e" containerID="cc4b64bb82fa02a773e10b35b78b7db061332bf307076dad5daf65359c550229" exitCode=0 Jan 20 18:31:07 crc kubenswrapper[4661]: I0120 18:31:07.681139 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbcc6" event={"ID":"12591ff8-2a55-4302-ac27-0df813dfce6e","Type":"ContainerDied","Data":"cc4b64bb82fa02a773e10b35b78b7db061332bf307076dad5daf65359c550229"} Jan 20 18:31:07 crc kubenswrapper[4661]: I0120 18:31:07.681240 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbcc6" event={"ID":"12591ff8-2a55-4302-ac27-0df813dfce6e","Type":"ContainerStarted","Data":"632d1bab35ad4e188ac73168fd5579065ad2cf9ff191a5f59f97e67dd642ebb6"} Jan 20 18:31:08 crc kubenswrapper[4661]: I0120 18:31:08.698484 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbcc6" event={"ID":"12591ff8-2a55-4302-ac27-0df813dfce6e","Type":"ContainerStarted","Data":"e5ab59f46ee03cf46940695302e94cd4bdc6086eba83ca87fde01939e86a7b42"} Jan 20 18:31:09 crc kubenswrapper[4661]: I0120 18:31:09.708146 4661 generic.go:334] "Generic (PLEG): container finished" podID="12591ff8-2a55-4302-ac27-0df813dfce6e" containerID="e5ab59f46ee03cf46940695302e94cd4bdc6086eba83ca87fde01939e86a7b42" exitCode=0 Jan 20 18:31:09 crc kubenswrapper[4661]: I0120 18:31:09.708204 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbcc6" event={"ID":"12591ff8-2a55-4302-ac27-0df813dfce6e","Type":"ContainerDied","Data":"e5ab59f46ee03cf46940695302e94cd4bdc6086eba83ca87fde01939e86a7b42"} Jan 20 18:31:10 crc kubenswrapper[4661]: I0120 18:31:10.721194 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbcc6" event={"ID":"12591ff8-2a55-4302-ac27-0df813dfce6e","Type":"ContainerStarted","Data":"c9498cae34302c726f8802debcb1ae1b4cb4265179f6f0c6d4e6b9e5975f1c0b"} Jan 20 18:31:10 crc kubenswrapper[4661]: I0120 18:31:10.742783 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sbcc6" podStartSLOduration=2.300464628 podStartE2EDuration="4.742761831s" podCreationTimestamp="2026-01-20 18:31:06 +0000 UTC" firstStartedPulling="2026-01-20 18:31:07.683327861 +0000 UTC m=+1524.014117523" lastFinishedPulling="2026-01-20 18:31:10.125625064 
+0000 UTC m=+1526.456414726" observedRunningTime="2026-01-20 18:31:10.738911084 +0000 UTC m=+1527.069700776" watchObservedRunningTime="2026-01-20 18:31:10.742761831 +0000 UTC m=+1527.073551503" Jan 20 18:31:15 crc kubenswrapper[4661]: I0120 18:31:15.183104 4661 scope.go:117] "RemoveContainer" containerID="f2cc9eecd7e9e582ebb25740836d718a0994063682d827bd0fc03e7f0bdf8b3d" Jan 20 18:31:16 crc kubenswrapper[4661]: I0120 18:31:16.750492 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:16 crc kubenswrapper[4661]: I0120 18:31:16.750896 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:16 crc kubenswrapper[4661]: I0120 18:31:16.800207 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:16 crc kubenswrapper[4661]: I0120 18:31:16.850520 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:17 crc kubenswrapper[4661]: I0120 18:31:17.035030 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sbcc6"] Jan 20 18:31:18 crc kubenswrapper[4661]: I0120 18:31:18.794012 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sbcc6" podUID="12591ff8-2a55-4302-ac27-0df813dfce6e" containerName="registry-server" containerID="cri-o://c9498cae34302c726f8802debcb1ae1b4cb4265179f6f0c6d4e6b9e5975f1c0b" gracePeriod=2 Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.205118 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.276365 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12591ff8-2a55-4302-ac27-0df813dfce6e-catalog-content\") pod \"12591ff8-2a55-4302-ac27-0df813dfce6e\" (UID: \"12591ff8-2a55-4302-ac27-0df813dfce6e\") " Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.276444 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5dbk\" (UniqueName: \"kubernetes.io/projected/12591ff8-2a55-4302-ac27-0df813dfce6e-kube-api-access-j5dbk\") pod \"12591ff8-2a55-4302-ac27-0df813dfce6e\" (UID: \"12591ff8-2a55-4302-ac27-0df813dfce6e\") " Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.276536 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12591ff8-2a55-4302-ac27-0df813dfce6e-utilities\") pod \"12591ff8-2a55-4302-ac27-0df813dfce6e\" (UID: \"12591ff8-2a55-4302-ac27-0df813dfce6e\") " Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.278412 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12591ff8-2a55-4302-ac27-0df813dfce6e-utilities" (OuterVolumeSpecName: "utilities") pod "12591ff8-2a55-4302-ac27-0df813dfce6e" (UID: "12591ff8-2a55-4302-ac27-0df813dfce6e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.283412 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12591ff8-2a55-4302-ac27-0df813dfce6e-kube-api-access-j5dbk" (OuterVolumeSpecName: "kube-api-access-j5dbk") pod "12591ff8-2a55-4302-ac27-0df813dfce6e" (UID: "12591ff8-2a55-4302-ac27-0df813dfce6e"). InnerVolumeSpecName "kube-api-access-j5dbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.323247 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12591ff8-2a55-4302-ac27-0df813dfce6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12591ff8-2a55-4302-ac27-0df813dfce6e" (UID: "12591ff8-2a55-4302-ac27-0df813dfce6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.378489 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12591ff8-2a55-4302-ac27-0df813dfce6e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.378520 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5dbk\" (UniqueName: \"kubernetes.io/projected/12591ff8-2a55-4302-ac27-0df813dfce6e-kube-api-access-j5dbk\") on node \"crc\" DevicePath \"\"" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.378531 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12591ff8-2a55-4302-ac27-0df813dfce6e-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.808266 4661 generic.go:334] "Generic (PLEG): container finished" podID="12591ff8-2a55-4302-ac27-0df813dfce6e" containerID="c9498cae34302c726f8802debcb1ae1b4cb4265179f6f0c6d4e6b9e5975f1c0b" exitCode=0 Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.808318 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbcc6" event={"ID":"12591ff8-2a55-4302-ac27-0df813dfce6e","Type":"ContainerDied","Data":"c9498cae34302c726f8802debcb1ae1b4cb4265179f6f0c6d4e6b9e5975f1c0b"} Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.808359 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbcc6" event={"ID":"12591ff8-2a55-4302-ac27-0df813dfce6e","Type":"ContainerDied","Data":"632d1bab35ad4e188ac73168fd5579065ad2cf9ff191a5f59f97e67dd642ebb6"} Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.808387 4661 scope.go:117] "RemoveContainer" containerID="c9498cae34302c726f8802debcb1ae1b4cb4265179f6f0c6d4e6b9e5975f1c0b" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.808392 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sbcc6" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.833758 4661 scope.go:117] "RemoveContainer" containerID="e5ab59f46ee03cf46940695302e94cd4bdc6086eba83ca87fde01939e86a7b42" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.881912 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sbcc6"] Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.882318 4661 scope.go:117] "RemoveContainer" containerID="cc4b64bb82fa02a773e10b35b78b7db061332bf307076dad5daf65359c550229" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.895340 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sbcc6"] Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.931802 4661 scope.go:117] "RemoveContainer" containerID="c9498cae34302c726f8802debcb1ae1b4cb4265179f6f0c6d4e6b9e5975f1c0b" Jan 20 18:31:19 crc kubenswrapper[4661]: E0120 18:31:19.932464 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9498cae34302c726f8802debcb1ae1b4cb4265179f6f0c6d4e6b9e5975f1c0b\": container with ID starting with c9498cae34302c726f8802debcb1ae1b4cb4265179f6f0c6d4e6b9e5975f1c0b not found: ID does not exist" containerID="c9498cae34302c726f8802debcb1ae1b4cb4265179f6f0c6d4e6b9e5975f1c0b" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.932538 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9498cae34302c726f8802debcb1ae1b4cb4265179f6f0c6d4e6b9e5975f1c0b"} err="failed to get container status \"c9498cae34302c726f8802debcb1ae1b4cb4265179f6f0c6d4e6b9e5975f1c0b\": rpc error: code = NotFound desc = could not find container \"c9498cae34302c726f8802debcb1ae1b4cb4265179f6f0c6d4e6b9e5975f1c0b\": container with ID starting with c9498cae34302c726f8802debcb1ae1b4cb4265179f6f0c6d4e6b9e5975f1c0b not found: ID does not exist" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.932582 4661 scope.go:117] "RemoveContainer" containerID="e5ab59f46ee03cf46940695302e94cd4bdc6086eba83ca87fde01939e86a7b42" Jan 20 18:31:19 crc kubenswrapper[4661]: E0120 18:31:19.933085 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5ab59f46ee03cf46940695302e94cd4bdc6086eba83ca87fde01939e86a7b42\": container with ID starting with e5ab59f46ee03cf46940695302e94cd4bdc6086eba83ca87fde01939e86a7b42 not found: ID does not exist" containerID="e5ab59f46ee03cf46940695302e94cd4bdc6086eba83ca87fde01939e86a7b42" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.933157 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5ab59f46ee03cf46940695302e94cd4bdc6086eba83ca87fde01939e86a7b42"} err="failed to get container status \"e5ab59f46ee03cf46940695302e94cd4bdc6086eba83ca87fde01939e86a7b42\": rpc error: code = NotFound desc = could not find container \"e5ab59f46ee03cf46940695302e94cd4bdc6086eba83ca87fde01939e86a7b42\": container with ID starting with e5ab59f46ee03cf46940695302e94cd4bdc6086eba83ca87fde01939e86a7b42 not found: ID does not exist" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.933196 4661 scope.go:117] "RemoveContainer" containerID="cc4b64bb82fa02a773e10b35b78b7db061332bf307076dad5daf65359c550229" Jan 20 18:31:19 crc kubenswrapper[4661]: E0120 18:31:19.933843 4661 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cc4b64bb82fa02a773e10b35b78b7db061332bf307076dad5daf65359c550229\": container with ID starting with cc4b64bb82fa02a773e10b35b78b7db061332bf307076dad5daf65359c550229 not found: ID does not exist" containerID="cc4b64bb82fa02a773e10b35b78b7db061332bf307076dad5daf65359c550229" Jan 20 18:31:19 crc kubenswrapper[4661]: I0120 18:31:19.933887 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc4b64bb82fa02a773e10b35b78b7db061332bf307076dad5daf65359c550229"} err="failed to get container status \"cc4b64bb82fa02a773e10b35b78b7db061332bf307076dad5daf65359c550229\": rpc error: code = NotFound desc = could not find container \"cc4b64bb82fa02a773e10b35b78b7db061332bf307076dad5daf65359c550229\": container with ID starting with cc4b64bb82fa02a773e10b35b78b7db061332bf307076dad5daf65359c550229 not found: ID does not exist" Jan 20 18:31:20 crc kubenswrapper[4661]: I0120 18:31:20.153221 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12591ff8-2a55-4302-ac27-0df813dfce6e" path="/var/lib/kubelet/pods/12591ff8-2a55-4302-ac27-0df813dfce6e/volumes" Jan 20 18:31:28 crc kubenswrapper[4661]: I0120 18:31:28.853654 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4mwpl"] Jan 20 18:31:28 crc kubenswrapper[4661]: E0120 18:31:28.854506 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12591ff8-2a55-4302-ac27-0df813dfce6e" containerName="extract-utilities" Jan 20 18:31:28 crc kubenswrapper[4661]: I0120 18:31:28.854523 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="12591ff8-2a55-4302-ac27-0df813dfce6e" containerName="extract-utilities" Jan 20 18:31:28 crc kubenswrapper[4661]: E0120 18:31:28.854549 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12591ff8-2a55-4302-ac27-0df813dfce6e" containerName="registry-server" Jan 20 18:31:28 crc kubenswrapper[4661]: I0120 18:31:28.854557 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="12591ff8-2a55-4302-ac27-0df813dfce6e" containerName="registry-server" Jan 20 18:31:28 crc kubenswrapper[4661]: E0120 18:31:28.854573 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12591ff8-2a55-4302-ac27-0df813dfce6e" containerName="extract-content" Jan 20 18:31:28 crc kubenswrapper[4661]: I0120 18:31:28.854579 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="12591ff8-2a55-4302-ac27-0df813dfce6e" containerName="extract-content" Jan 20 18:31:28 crc kubenswrapper[4661]: I0120 18:31:28.854764 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="12591ff8-2a55-4302-ac27-0df813dfce6e" containerName="registry-server" Jan 20 18:31:28 crc kubenswrapper[4661]: I0120 18:31:28.856267 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:28 crc kubenswrapper[4661]: I0120 18:31:28.865383 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mwpl"] Jan 20 18:31:28 crc kubenswrapper[4661]: I0120 18:31:28.959413 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16789029-eec6-4884-9bd7-52e99134dc15-utilities\") pod \"community-operators-4mwpl\" (UID: \"16789029-eec6-4884-9bd7-52e99134dc15\") " pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:28 crc kubenswrapper[4661]: I0120 18:31:28.959578 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxtf9\" (UniqueName: \"kubernetes.io/projected/16789029-eec6-4884-9bd7-52e99134dc15-kube-api-access-qxtf9\") pod \"community-operators-4mwpl\" (UID: \"16789029-eec6-4884-9bd7-52e99134dc15\") " pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:28 crc kubenswrapper[4661]: I0120 18:31:28.959612 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16789029-eec6-4884-9bd7-52e99134dc15-catalog-content\") pod \"community-operators-4mwpl\" (UID: \"16789029-eec6-4884-9bd7-52e99134dc15\") " pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:29 crc kubenswrapper[4661]: I0120 18:31:29.062148 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16789029-eec6-4884-9bd7-52e99134dc15-utilities\") pod \"community-operators-4mwpl\" (UID: \"16789029-eec6-4884-9bd7-52e99134dc15\") " pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:29 crc kubenswrapper[4661]: I0120 18:31:29.062299 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxtf9\" (UniqueName: \"kubernetes.io/projected/16789029-eec6-4884-9bd7-52e99134dc15-kube-api-access-qxtf9\") pod \"community-operators-4mwpl\" (UID: \"16789029-eec6-4884-9bd7-52e99134dc15\") " pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:29 crc kubenswrapper[4661]: I0120 18:31:29.062342 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16789029-eec6-4884-9bd7-52e99134dc15-catalog-content\") pod \"community-operators-4mwpl\" (UID: \"16789029-eec6-4884-9bd7-52e99134dc15\") " pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:29 crc kubenswrapper[4661]: I0120 18:31:29.062859 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16789029-eec6-4884-9bd7-52e99134dc15-catalog-content\") pod \"community-operators-4mwpl\" (UID: \"16789029-eec6-4884-9bd7-52e99134dc15\") " pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:29 crc kubenswrapper[4661]: I0120 18:31:29.063218 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16789029-eec6-4884-9bd7-52e99134dc15-utilities\") pod \"community-operators-4mwpl\" (UID: \"16789029-eec6-4884-9bd7-52e99134dc15\") " pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:29 crc kubenswrapper[4661]: I0120 18:31:29.096607 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qxtf9\" (UniqueName: \"kubernetes.io/projected/16789029-eec6-4884-9bd7-52e99134dc15-kube-api-access-qxtf9\") pod \"community-operators-4mwpl\" (UID: \"16789029-eec6-4884-9bd7-52e99134dc15\") " pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:29 crc kubenswrapper[4661]: I0120 18:31:29.176005 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:29 crc kubenswrapper[4661]: I0120 18:31:29.328077 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:31:29 crc kubenswrapper[4661]: I0120 18:31:29.328384 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:31:29 crc kubenswrapper[4661]: I0120 18:31:29.618250 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mwpl"] Jan 20 18:31:29 crc kubenswrapper[4661]: I0120 18:31:29.919009 4661 generic.go:334] "Generic (PLEG): container finished" podID="16789029-eec6-4884-9bd7-52e99134dc15" containerID="84e9eb67b19d02e21b8f3e9dc6fa61a4327ddbcedc1e93c9143337e2ab446fce" exitCode=0 Jan 20 18:31:29 crc kubenswrapper[4661]: I0120 18:31:29.919306 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwpl" event={"ID":"16789029-eec6-4884-9bd7-52e99134dc15","Type":"ContainerDied","Data":"84e9eb67b19d02e21b8f3e9dc6fa61a4327ddbcedc1e93c9143337e2ab446fce"} Jan 20 18:31:29 crc kubenswrapper[4661]: I0120 18:31:29.919330 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwpl" event={"ID":"16789029-eec6-4884-9bd7-52e99134dc15","Type":"ContainerStarted","Data":"4540c33fcdc564bea88fa94784ef52bede1f6f33ff62d904d6a99ec5bdfc516b"} Jan 20 18:31:30 crc kubenswrapper[4661]: I0120 18:31:30.931552 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwpl" event={"ID":"16789029-eec6-4884-9bd7-52e99134dc15","Type":"ContainerStarted","Data":"36b9e1a3e8705e6067b78bbf80f85964623b11972a97899be4c4edb0136e2f94"} Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.447167 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gd8f4"] Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.449476 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.456886 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gd8f4"] Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.522080 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a197f-ec7c-4188-a58a-7a87500bcd36-catalog-content\") pod \"redhat-marketplace-gd8f4\" (UID: \"055a197f-ec7c-4188-a58a-7a87500bcd36\") " pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.522456 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44qkv\" (UniqueName: \"kubernetes.io/projected/055a197f-ec7c-4188-a58a-7a87500bcd36-kube-api-access-44qkv\") pod \"redhat-marketplace-gd8f4\" (UID: \"055a197f-ec7c-4188-a58a-7a87500bcd36\") " pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.522513 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a197f-ec7c-4188-a58a-7a87500bcd36-utilities\") pod \"redhat-marketplace-gd8f4\" (UID: \"055a197f-ec7c-4188-a58a-7a87500bcd36\") " pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.624661 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a197f-ec7c-4188-a58a-7a87500bcd36-utilities\") pod \"redhat-marketplace-gd8f4\" (UID: \"055a197f-ec7c-4188-a58a-7a87500bcd36\") " pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.624763 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a197f-ec7c-4188-a58a-7a87500bcd36-catalog-content\") pod \"redhat-marketplace-gd8f4\" (UID: \"055a197f-ec7c-4188-a58a-7a87500bcd36\") " pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.624866 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44qkv\" (UniqueName: \"kubernetes.io/projected/055a197f-ec7c-4188-a58a-7a87500bcd36-kube-api-access-44qkv\") pod \"redhat-marketplace-gd8f4\" (UID: \"055a197f-ec7c-4188-a58a-7a87500bcd36\") " pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.625254 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a197f-ec7c-4188-a58a-7a87500bcd36-utilities\") pod \"redhat-marketplace-gd8f4\" (UID: \"055a197f-ec7c-4188-a58a-7a87500bcd36\") " pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.625268 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a197f-ec7c-4188-a58a-7a87500bcd36-catalog-content\") pod \"redhat-marketplace-gd8f4\" (UID: \"055a197f-ec7c-4188-a58a-7a87500bcd36\") " pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.646481 4661 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-44qkv\" (UniqueName: \"kubernetes.io/projected/055a197f-ec7c-4188-a58a-7a87500bcd36-kube-api-access-44qkv\") pod \"redhat-marketplace-gd8f4\" (UID: \"055a197f-ec7c-4188-a58a-7a87500bcd36\") " pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.785157 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.948630 4661 generic.go:334] "Generic (PLEG): container finished" podID="16789029-eec6-4884-9bd7-52e99134dc15" containerID="36b9e1a3e8705e6067b78bbf80f85964623b11972a97899be4c4edb0136e2f94" exitCode=0 Jan 20 18:31:32 crc kubenswrapper[4661]: I0120 18:31:32.948813 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwpl" event={"ID":"16789029-eec6-4884-9bd7-52e99134dc15","Type":"ContainerDied","Data":"36b9e1a3e8705e6067b78bbf80f85964623b11972a97899be4c4edb0136e2f94"} Jan 20 18:31:33 crc kubenswrapper[4661]: I0120 18:31:33.273784 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gd8f4"] Jan 20 18:31:33 crc kubenswrapper[4661]: I0120 18:31:33.959474 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwpl" event={"ID":"16789029-eec6-4884-9bd7-52e99134dc15","Type":"ContainerStarted","Data":"a2e3ddd65eafc61e033e6eab9565cfdd89c03dc5e76fa145ce2bccc59222f38a"} Jan 20 18:31:33 crc kubenswrapper[4661]: I0120 18:31:33.962507 4661 generic.go:334] "Generic (PLEG): container finished" podID="055a197f-ec7c-4188-a58a-7a87500bcd36" containerID="3142a6d3c03f06f2fe8b6be8b492feceb25ccf8f937531657854c8a7422adabe" exitCode=0 Jan 20 18:31:33 crc kubenswrapper[4661]: I0120 18:31:33.962544 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gd8f4" event={"ID":"055a197f-ec7c-4188-a58a-7a87500bcd36","Type":"ContainerDied","Data":"3142a6d3c03f06f2fe8b6be8b492feceb25ccf8f937531657854c8a7422adabe"} Jan 20 18:31:33 crc kubenswrapper[4661]: I0120 18:31:33.962569 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gd8f4" event={"ID":"055a197f-ec7c-4188-a58a-7a87500bcd36","Type":"ContainerStarted","Data":"e3006af8cc6d87f561b5c9cf4a5964407058362b171d4e88b77d5e3f8c364432"} Jan 20 18:31:33 crc kubenswrapper[4661]: I0120 18:31:33.986627 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4mwpl" podStartSLOduration=2.230663096 podStartE2EDuration="5.986606949s" podCreationTimestamp="2026-01-20 18:31:28 +0000 UTC" firstStartedPulling="2026-01-20 18:31:29.920472381 +0000 UTC m=+1546.251262043" lastFinishedPulling="2026-01-20 18:31:33.676416224 +0000 UTC m=+1550.007205896" observedRunningTime="2026-01-20 18:31:33.979618952 +0000 UTC m=+1550.310408624" watchObservedRunningTime="2026-01-20 18:31:33.986606949 +0000 UTC m=+1550.317396611" Jan 20 18:31:34 crc kubenswrapper[4661]: I0120 18:31:34.972109 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gd8f4" event={"ID":"055a197f-ec7c-4188-a58a-7a87500bcd36","Type":"ContainerStarted","Data":"e75ec9272720f4366f11320fe3179111027386e816cbcd0f507f8d9042390c74"} Jan 20 18:31:35 crc kubenswrapper[4661]: I0120 18:31:35.982496 4661 generic.go:334] "Generic (PLEG): container finished" 
podID="055a197f-ec7c-4188-a58a-7a87500bcd36" containerID="e75ec9272720f4366f11320fe3179111027386e816cbcd0f507f8d9042390c74" exitCode=0 Jan 20 18:31:35 crc kubenswrapper[4661]: I0120 18:31:35.982568 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gd8f4" event={"ID":"055a197f-ec7c-4188-a58a-7a87500bcd36","Type":"ContainerDied","Data":"e75ec9272720f4366f11320fe3179111027386e816cbcd0f507f8d9042390c74"} Jan 20 18:31:36 crc kubenswrapper[4661]: I0120 18:31:36.992102 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gd8f4" event={"ID":"055a197f-ec7c-4188-a58a-7a87500bcd36","Type":"ContainerStarted","Data":"9ff4252c1dab04a39e74249a376884dbc35606d47653d0b5407b68687b238d42"} Jan 20 18:31:37 crc kubenswrapper[4661]: I0120 18:31:37.016264 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gd8f4" podStartSLOduration=2.448108036 podStartE2EDuration="5.016244143s" podCreationTimestamp="2026-01-20 18:31:32 +0000 UTC" firstStartedPulling="2026-01-20 18:31:33.96377745 +0000 UTC m=+1550.294567112" lastFinishedPulling="2026-01-20 18:31:36.531913557 +0000 UTC m=+1552.862703219" observedRunningTime="2026-01-20 18:31:37.00782521 +0000 UTC m=+1553.338614892" watchObservedRunningTime="2026-01-20 18:31:37.016244143 +0000 UTC m=+1553.347033805" Jan 20 18:31:39 crc kubenswrapper[4661]: I0120 18:31:39.177702 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:39 crc kubenswrapper[4661]: I0120 18:31:39.178040 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:40 crc kubenswrapper[4661]: I0120 18:31:40.228777 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4mwpl" podUID="16789029-eec6-4884-9bd7-52e99134dc15" containerName="registry-server" probeResult="failure" output=< Jan 20 18:31:40 crc kubenswrapper[4661]: timeout: failed to connect service ":50051" within 1s Jan 20 18:31:40 crc kubenswrapper[4661]: > Jan 20 18:31:42 crc kubenswrapper[4661]: I0120 18:31:42.786456 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:42 crc kubenswrapper[4661]: I0120 18:31:42.786771 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:42 crc kubenswrapper[4661]: I0120 18:31:42.842051 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:43 crc kubenswrapper[4661]: I0120 18:31:43.107258 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:43 crc kubenswrapper[4661]: I0120 18:31:43.157310 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gd8f4"] Jan 20 18:31:45 crc kubenswrapper[4661]: I0120 18:31:45.072862 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gd8f4" podUID="055a197f-ec7c-4188-a58a-7a87500bcd36" containerName="registry-server" containerID="cri-o://9ff4252c1dab04a39e74249a376884dbc35606d47653d0b5407b68687b238d42" gracePeriod=2 Jan 20 18:31:45 crc kubenswrapper[4661]: I0120 
18:31:45.556561 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:45 crc kubenswrapper[4661]: I0120 18:31:45.668685 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a197f-ec7c-4188-a58a-7a87500bcd36-utilities\") pod \"055a197f-ec7c-4188-a58a-7a87500bcd36\" (UID: \"055a197f-ec7c-4188-a58a-7a87500bcd36\") " Jan 20 18:31:45 crc kubenswrapper[4661]: I0120 18:31:45.668779 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a197f-ec7c-4188-a58a-7a87500bcd36-catalog-content\") pod \"055a197f-ec7c-4188-a58a-7a87500bcd36\" (UID: \"055a197f-ec7c-4188-a58a-7a87500bcd36\") " Jan 20 18:31:45 crc kubenswrapper[4661]: I0120 18:31:45.668890 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44qkv\" (UniqueName: \"kubernetes.io/projected/055a197f-ec7c-4188-a58a-7a87500bcd36-kube-api-access-44qkv\") pod \"055a197f-ec7c-4188-a58a-7a87500bcd36\" (UID: \"055a197f-ec7c-4188-a58a-7a87500bcd36\") " Jan 20 18:31:45 crc kubenswrapper[4661]: I0120 18:31:45.669428 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/055a197f-ec7c-4188-a58a-7a87500bcd36-utilities" (OuterVolumeSpecName: "utilities") pod "055a197f-ec7c-4188-a58a-7a87500bcd36" (UID: "055a197f-ec7c-4188-a58a-7a87500bcd36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:31:45 crc kubenswrapper[4661]: I0120 18:31:45.701891 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/055a197f-ec7c-4188-a58a-7a87500bcd36-kube-api-access-44qkv" (OuterVolumeSpecName: "kube-api-access-44qkv") pod "055a197f-ec7c-4188-a58a-7a87500bcd36" (UID: "055a197f-ec7c-4188-a58a-7a87500bcd36"). InnerVolumeSpecName "kube-api-access-44qkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:31:45 crc kubenswrapper[4661]: I0120 18:31:45.739386 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/055a197f-ec7c-4188-a58a-7a87500bcd36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "055a197f-ec7c-4188-a58a-7a87500bcd36" (UID: "055a197f-ec7c-4188-a58a-7a87500bcd36"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:31:45 crc kubenswrapper[4661]: I0120 18:31:45.770613 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a197f-ec7c-4188-a58a-7a87500bcd36-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:31:45 crc kubenswrapper[4661]: I0120 18:31:45.770639 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a197f-ec7c-4188-a58a-7a87500bcd36-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:31:45 crc kubenswrapper[4661]: I0120 18:31:45.770648 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44qkv\" (UniqueName: \"kubernetes.io/projected/055a197f-ec7c-4188-a58a-7a87500bcd36-kube-api-access-44qkv\") on node \"crc\" DevicePath \"\"" Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.085489 4661 generic.go:334] "Generic (PLEG): container finished" podID="055a197f-ec7c-4188-a58a-7a87500bcd36" containerID="9ff4252c1dab04a39e74249a376884dbc35606d47653d0b5407b68687b238d42" exitCode=0 Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.085831 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gd8f4" event={"ID":"055a197f-ec7c-4188-a58a-7a87500bcd36","Type":"ContainerDied","Data":"9ff4252c1dab04a39e74249a376884dbc35606d47653d0b5407b68687b238d42"} Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.085863 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gd8f4" event={"ID":"055a197f-ec7c-4188-a58a-7a87500bcd36","Type":"ContainerDied","Data":"e3006af8cc6d87f561b5c9cf4a5964407058362b171d4e88b77d5e3f8c364432"} Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.085885 4661 scope.go:117] "RemoveContainer" containerID="9ff4252c1dab04a39e74249a376884dbc35606d47653d0b5407b68687b238d42" Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.086040 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gd8f4" Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.108014 4661 scope.go:117] "RemoveContainer" containerID="e75ec9272720f4366f11320fe3179111027386e816cbcd0f507f8d9042390c74" Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.131123 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gd8f4"] Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.140354 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gd8f4"] Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.143241 4661 scope.go:117] "RemoveContainer" containerID="3142a6d3c03f06f2fe8b6be8b492feceb25ccf8f937531657854c8a7422adabe" Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.159871 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="055a197f-ec7c-4188-a58a-7a87500bcd36" path="/var/lib/kubelet/pods/055a197f-ec7c-4188-a58a-7a87500bcd36/volumes" Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.170354 4661 scope.go:117] "RemoveContainer" containerID="9ff4252c1dab04a39e74249a376884dbc35606d47653d0b5407b68687b238d42" Jan 20 18:31:46 crc kubenswrapper[4661]: E0120 18:31:46.170918 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ff4252c1dab04a39e74249a376884dbc35606d47653d0b5407b68687b238d42\": container with ID starting with 9ff4252c1dab04a39e74249a376884dbc35606d47653d0b5407b68687b238d42 not found: ID does not exist" containerID="9ff4252c1dab04a39e74249a376884dbc35606d47653d0b5407b68687b238d42" Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.171016 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ff4252c1dab04a39e74249a376884dbc35606d47653d0b5407b68687b238d42"} err="failed to get container status \"9ff4252c1dab04a39e74249a376884dbc35606d47653d0b5407b68687b238d42\": rpc error: code = NotFound desc = could not find container \"9ff4252c1dab04a39e74249a376884dbc35606d47653d0b5407b68687b238d42\": container with ID starting with 9ff4252c1dab04a39e74249a376884dbc35606d47653d0b5407b68687b238d42 not found: ID does not exist" Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.171053 4661 scope.go:117] "RemoveContainer" containerID="e75ec9272720f4366f11320fe3179111027386e816cbcd0f507f8d9042390c74" Jan 20 18:31:46 crc kubenswrapper[4661]: E0120 18:31:46.171543 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e75ec9272720f4366f11320fe3179111027386e816cbcd0f507f8d9042390c74\": container with ID starting with e75ec9272720f4366f11320fe3179111027386e816cbcd0f507f8d9042390c74 not found: ID does not exist" containerID="e75ec9272720f4366f11320fe3179111027386e816cbcd0f507f8d9042390c74" Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.171595 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e75ec9272720f4366f11320fe3179111027386e816cbcd0f507f8d9042390c74"} err="failed to get container status \"e75ec9272720f4366f11320fe3179111027386e816cbcd0f507f8d9042390c74\": rpc error: code = NotFound desc = could not find container \"e75ec9272720f4366f11320fe3179111027386e816cbcd0f507f8d9042390c74\": container with ID starting with e75ec9272720f4366f11320fe3179111027386e816cbcd0f507f8d9042390c74 not found: ID does not exist" Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 
18:31:46.171631 4661 scope.go:117] "RemoveContainer" containerID="3142a6d3c03f06f2fe8b6be8b492feceb25ccf8f937531657854c8a7422adabe" Jan 20 18:31:46 crc kubenswrapper[4661]: E0120 18:31:46.172026 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3142a6d3c03f06f2fe8b6be8b492feceb25ccf8f937531657854c8a7422adabe\": container with ID starting with 3142a6d3c03f06f2fe8b6be8b492feceb25ccf8f937531657854c8a7422adabe not found: ID does not exist" containerID="3142a6d3c03f06f2fe8b6be8b492feceb25ccf8f937531657854c8a7422adabe" Jan 20 18:31:46 crc kubenswrapper[4661]: I0120 18:31:46.172075 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3142a6d3c03f06f2fe8b6be8b492feceb25ccf8f937531657854c8a7422adabe"} err="failed to get container status \"3142a6d3c03f06f2fe8b6be8b492feceb25ccf8f937531657854c8a7422adabe\": rpc error: code = NotFound desc = could not find container \"3142a6d3c03f06f2fe8b6be8b492feceb25ccf8f937531657854c8a7422adabe\": container with ID starting with 3142a6d3c03f06f2fe8b6be8b492feceb25ccf8f937531657854c8a7422adabe not found: ID does not exist" Jan 20 18:31:49 crc kubenswrapper[4661]: I0120 18:31:49.231582 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:49 crc kubenswrapper[4661]: I0120 18:31:49.286779 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:49 crc kubenswrapper[4661]: I0120 18:31:49.473394 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mwpl"] Jan 20 18:31:51 crc kubenswrapper[4661]: I0120 18:31:51.132731 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4mwpl" podUID="16789029-eec6-4884-9bd7-52e99134dc15" containerName="registry-server" containerID="cri-o://a2e3ddd65eafc61e033e6eab9565cfdd89c03dc5e76fa145ce2bccc59222f38a" gracePeriod=2 Jan 20 18:31:51 crc kubenswrapper[4661]: I0120 18:31:51.612315 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:51 crc kubenswrapper[4661]: I0120 18:31:51.678594 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16789029-eec6-4884-9bd7-52e99134dc15-utilities\") pod \"16789029-eec6-4884-9bd7-52e99134dc15\" (UID: \"16789029-eec6-4884-9bd7-52e99134dc15\") " Jan 20 18:31:51 crc kubenswrapper[4661]: I0120 18:31:51.678718 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16789029-eec6-4884-9bd7-52e99134dc15-catalog-content\") pod \"16789029-eec6-4884-9bd7-52e99134dc15\" (UID: \"16789029-eec6-4884-9bd7-52e99134dc15\") " Jan 20 18:31:51 crc kubenswrapper[4661]: I0120 18:31:51.678836 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxtf9\" (UniqueName: \"kubernetes.io/projected/16789029-eec6-4884-9bd7-52e99134dc15-kube-api-access-qxtf9\") pod \"16789029-eec6-4884-9bd7-52e99134dc15\" (UID: \"16789029-eec6-4884-9bd7-52e99134dc15\") " Jan 20 18:31:51 crc kubenswrapper[4661]: I0120 18:31:51.679953 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16789029-eec6-4884-9bd7-52e99134dc15-utilities" (OuterVolumeSpecName: "utilities") pod "16789029-eec6-4884-9bd7-52e99134dc15" (UID: "16789029-eec6-4884-9bd7-52e99134dc15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:31:51 crc kubenswrapper[4661]: I0120 18:31:51.697237 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16789029-eec6-4884-9bd7-52e99134dc15-kube-api-access-qxtf9" (OuterVolumeSpecName: "kube-api-access-qxtf9") pod "16789029-eec6-4884-9bd7-52e99134dc15" (UID: "16789029-eec6-4884-9bd7-52e99134dc15"). InnerVolumeSpecName "kube-api-access-qxtf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:31:51 crc kubenswrapper[4661]: I0120 18:31:51.743322 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16789029-eec6-4884-9bd7-52e99134dc15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16789029-eec6-4884-9bd7-52e99134dc15" (UID: "16789029-eec6-4884-9bd7-52e99134dc15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:31:51 crc kubenswrapper[4661]: I0120 18:31:51.783012 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxtf9\" (UniqueName: \"kubernetes.io/projected/16789029-eec6-4884-9bd7-52e99134dc15-kube-api-access-qxtf9\") on node \"crc\" DevicePath \"\"" Jan 20 18:31:51 crc kubenswrapper[4661]: I0120 18:31:51.783046 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16789029-eec6-4884-9bd7-52e99134dc15-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:31:51 crc kubenswrapper[4661]: I0120 18:31:51.783055 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16789029-eec6-4884-9bd7-52e99134dc15-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.142705 4661 generic.go:334] "Generic (PLEG): container finished" podID="16789029-eec6-4884-9bd7-52e99134dc15" containerID="a2e3ddd65eafc61e033e6eab9565cfdd89c03dc5e76fa145ce2bccc59222f38a" exitCode=0 Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.142796 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mwpl" Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.161806 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwpl" event={"ID":"16789029-eec6-4884-9bd7-52e99134dc15","Type":"ContainerDied","Data":"a2e3ddd65eafc61e033e6eab9565cfdd89c03dc5e76fa145ce2bccc59222f38a"} Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.161844 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwpl" event={"ID":"16789029-eec6-4884-9bd7-52e99134dc15","Type":"ContainerDied","Data":"4540c33fcdc564bea88fa94784ef52bede1f6f33ff62d904d6a99ec5bdfc516b"} Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.161863 4661 scope.go:117] "RemoveContainer" containerID="a2e3ddd65eafc61e033e6eab9565cfdd89c03dc5e76fa145ce2bccc59222f38a" Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.192385 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mwpl"] Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.203892 4661 scope.go:117] "RemoveContainer" containerID="36b9e1a3e8705e6067b78bbf80f85964623b11972a97899be4c4edb0136e2f94" Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.204743 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4mwpl"] Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.221855 4661 scope.go:117] "RemoveContainer" containerID="84e9eb67b19d02e21b8f3e9dc6fa61a4327ddbcedc1e93c9143337e2ab446fce" Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.260080 4661 scope.go:117] "RemoveContainer" containerID="a2e3ddd65eafc61e033e6eab9565cfdd89c03dc5e76fa145ce2bccc59222f38a" Jan 20 18:31:52 crc kubenswrapper[4661]: E0120 18:31:52.261055 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2e3ddd65eafc61e033e6eab9565cfdd89c03dc5e76fa145ce2bccc59222f38a\": container with ID starting with a2e3ddd65eafc61e033e6eab9565cfdd89c03dc5e76fa145ce2bccc59222f38a not found: ID does not exist" containerID="a2e3ddd65eafc61e033e6eab9565cfdd89c03dc5e76fa145ce2bccc59222f38a" Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.261135 
4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e3ddd65eafc61e033e6eab9565cfdd89c03dc5e76fa145ce2bccc59222f38a"} err="failed to get container status \"a2e3ddd65eafc61e033e6eab9565cfdd89c03dc5e76fa145ce2bccc59222f38a\": rpc error: code = NotFound desc = could not find container \"a2e3ddd65eafc61e033e6eab9565cfdd89c03dc5e76fa145ce2bccc59222f38a\": container with ID starting with a2e3ddd65eafc61e033e6eab9565cfdd89c03dc5e76fa145ce2bccc59222f38a not found: ID does not exist" Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.261194 4661 scope.go:117] "RemoveContainer" containerID="36b9e1a3e8705e6067b78bbf80f85964623b11972a97899be4c4edb0136e2f94" Jan 20 18:31:52 crc kubenswrapper[4661]: E0120 18:31:52.263288 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b9e1a3e8705e6067b78bbf80f85964623b11972a97899be4c4edb0136e2f94\": container with ID starting with 36b9e1a3e8705e6067b78bbf80f85964623b11972a97899be4c4edb0136e2f94 not found: ID does not exist" containerID="36b9e1a3e8705e6067b78bbf80f85964623b11972a97899be4c4edb0136e2f94" Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.263403 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b9e1a3e8705e6067b78bbf80f85964623b11972a97899be4c4edb0136e2f94"} err="failed to get container status \"36b9e1a3e8705e6067b78bbf80f85964623b11972a97899be4c4edb0136e2f94\": rpc error: code = NotFound desc = could not find container \"36b9e1a3e8705e6067b78bbf80f85964623b11972a97899be4c4edb0136e2f94\": container with ID starting with 36b9e1a3e8705e6067b78bbf80f85964623b11972a97899be4c4edb0136e2f94 not found: ID does not exist" Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.263443 4661 scope.go:117] "RemoveContainer" containerID="84e9eb67b19d02e21b8f3e9dc6fa61a4327ddbcedc1e93c9143337e2ab446fce" Jan 20 18:31:52 crc kubenswrapper[4661]: E0120 18:31:52.264289 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e9eb67b19d02e21b8f3e9dc6fa61a4327ddbcedc1e93c9143337e2ab446fce\": container with ID starting with 84e9eb67b19d02e21b8f3e9dc6fa61a4327ddbcedc1e93c9143337e2ab446fce not found: ID does not exist" containerID="84e9eb67b19d02e21b8f3e9dc6fa61a4327ddbcedc1e93c9143337e2ab446fce" Jan 20 18:31:52 crc kubenswrapper[4661]: I0120 18:31:52.264320 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e9eb67b19d02e21b8f3e9dc6fa61a4327ddbcedc1e93c9143337e2ab446fce"} err="failed to get container status \"84e9eb67b19d02e21b8f3e9dc6fa61a4327ddbcedc1e93c9143337e2ab446fce\": rpc error: code = NotFound desc = could not find container \"84e9eb67b19d02e21b8f3e9dc6fa61a4327ddbcedc1e93c9143337e2ab446fce\": container with ID starting with 84e9eb67b19d02e21b8f3e9dc6fa61a4327ddbcedc1e93c9143337e2ab446fce not found: ID does not exist" Jan 20 18:31:54 crc kubenswrapper[4661]: I0120 18:31:54.154979 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16789029-eec6-4884-9bd7-52e99134dc15" path="/var/lib/kubelet/pods/16789029-eec6-4884-9bd7-52e99134dc15/volumes" Jan 20 18:31:59 crc kubenswrapper[4661]: I0120 18:31:59.323965 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:31:59 crc kubenswrapper[4661]: I0120 18:31:59.324547 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:31:59 crc kubenswrapper[4661]: I0120 18:31:59.324599 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:31:59 crc kubenswrapper[4661]: I0120 18:31:59.325458 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 18:31:59 crc kubenswrapper[4661]: I0120 18:31:59.325528 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" gracePeriod=600 Jan 20 18:31:59 crc kubenswrapper[4661]: E0120 18:31:59.487095 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:32:00 crc kubenswrapper[4661]: I0120 18:32:00.213059 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" exitCode=0 Jan 20 18:32:00 crc kubenswrapper[4661]: I0120 18:32:00.213129 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e"} Jan 20 18:32:00 crc kubenswrapper[4661]: I0120 18:32:00.213197 4661 scope.go:117] "RemoveContainer" containerID="f275e114f07b1c3b029b2e359e7da6e2181e513e10ba8fa3419553c67d8e09a7" Jan 20 18:32:00 crc kubenswrapper[4661]: I0120 18:32:00.213964 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:32:00 crc kubenswrapper[4661]: E0120 18:32:00.214418 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:32:11 crc kubenswrapper[4661]: I0120 18:32:11.141465 4661 scope.go:117] "RemoveContainer" 
containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:32:11 crc kubenswrapper[4661]: E0120 18:32:11.142067 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:32:25 crc kubenswrapper[4661]: I0120 18:32:25.143045 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:32:25 crc kubenswrapper[4661]: E0120 18:32:25.143905 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:32:38 crc kubenswrapper[4661]: I0120 18:32:38.141940 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:32:38 crc kubenswrapper[4661]: E0120 18:32:38.142729 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:32:53 crc kubenswrapper[4661]: I0120 18:32:53.142400 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:32:53 crc kubenswrapper[4661]: E0120 18:32:53.143412 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:33:07 crc kubenswrapper[4661]: I0120 18:33:07.143231 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:33:07 crc kubenswrapper[4661]: E0120 18:33:07.144573 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:33:15 crc kubenswrapper[4661]: I0120 18:33:15.308368 4661 scope.go:117] "RemoveContainer" containerID="8ef29d914a654b2a7c8dd6fd7036af984b4b3272ea8a1d028d9cf67d21169514" Jan 20 18:33:15 crc kubenswrapper[4661]: I0120 18:33:15.330571 4661 scope.go:117] "RemoveContainer" 
containerID="eecd28c7713ef934ce80f916120c1f21736c559183af0d31f9a0212a18869453" Jan 20 18:33:15 crc kubenswrapper[4661]: I0120 18:33:15.355861 4661 scope.go:117] "RemoveContainer" containerID="85ae039c6ba58ca8acaec9761c0ab69e29eda80f7400d96e264e079a6c80dc89" Jan 20 18:33:15 crc kubenswrapper[4661]: I0120 18:33:15.380255 4661 scope.go:117] "RemoveContainer" containerID="9e152bee87e820935f37220ae3486053f2f75adad8437cd919f41e0432a28bb1" Jan 20 18:33:15 crc kubenswrapper[4661]: I0120 18:33:15.420288 4661 scope.go:117] "RemoveContainer" containerID="386b4ba09b2989b6d3d4eb6cebb2ccc48f527d9df337d2a139365862bf3325cd" Jan 20 18:33:18 crc kubenswrapper[4661]: I0120 18:33:18.143534 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:33:18 crc kubenswrapper[4661]: E0120 18:33:18.144053 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:33:21 crc kubenswrapper[4661]: I0120 18:33:21.114322 4661 generic.go:334] "Generic (PLEG): container finished" podID="f5353890-587c-4ab1-b6a5-8e21f7c573b5" containerID="6fb88c73f55b9609f9d50f268b3726ae00d7cf53163008b6105adffe39fdd063" exitCode=0 Jan 20 18:33:21 crc kubenswrapper[4661]: I0120 18:33:21.114457 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" event={"ID":"f5353890-587c-4ab1-b6a5-8e21f7c573b5","Type":"ContainerDied","Data":"6fb88c73f55b9609f9d50f268b3726ae00d7cf53163008b6105adffe39fdd063"} Jan 20 18:33:22 crc kubenswrapper[4661]: I0120 18:33:22.564482 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:33:22 crc kubenswrapper[4661]: I0120 18:33:22.729636 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-inventory\") pod \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " Jan 20 18:33:22 crc kubenswrapper[4661]: I0120 18:33:22.730388 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-bootstrap-combined-ca-bundle\") pod \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " Jan 20 18:33:22 crc kubenswrapper[4661]: I0120 18:33:22.730587 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-ssh-key-openstack-edpm-ipam\") pod \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " Jan 20 18:33:22 crc kubenswrapper[4661]: I0120 18:33:22.730683 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdrnk\" (UniqueName: \"kubernetes.io/projected/f5353890-587c-4ab1-b6a5-8e21f7c573b5-kube-api-access-hdrnk\") pod \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\" (UID: \"f5353890-587c-4ab1-b6a5-8e21f7c573b5\") " Jan 20 18:33:22 crc kubenswrapper[4661]: I0120 18:33:22.736362 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5353890-587c-4ab1-b6a5-8e21f7c573b5-kube-api-access-hdrnk" (OuterVolumeSpecName: "kube-api-access-hdrnk") pod "f5353890-587c-4ab1-b6a5-8e21f7c573b5" (UID: "f5353890-587c-4ab1-b6a5-8e21f7c573b5"). InnerVolumeSpecName "kube-api-access-hdrnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:33:22 crc kubenswrapper[4661]: I0120 18:33:22.748695 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f5353890-587c-4ab1-b6a5-8e21f7c573b5" (UID: "f5353890-587c-4ab1-b6a5-8e21f7c573b5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:33:22 crc kubenswrapper[4661]: I0120 18:33:22.764123 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-inventory" (OuterVolumeSpecName: "inventory") pod "f5353890-587c-4ab1-b6a5-8e21f7c573b5" (UID: "f5353890-587c-4ab1-b6a5-8e21f7c573b5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:33:22 crc kubenswrapper[4661]: I0120 18:33:22.782765 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f5353890-587c-4ab1-b6a5-8e21f7c573b5" (UID: "f5353890-587c-4ab1-b6a5-8e21f7c573b5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:33:22 crc kubenswrapper[4661]: I0120 18:33:22.832562 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:33:22 crc kubenswrapper[4661]: I0120 18:33:22.832824 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdrnk\" (UniqueName: \"kubernetes.io/projected/f5353890-587c-4ab1-b6a5-8e21f7c573b5-kube-api-access-hdrnk\") on node \"crc\" DevicePath \"\"" Jan 20 18:33:22 crc kubenswrapper[4661]: I0120 18:33:22.832835 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:33:22 crc kubenswrapper[4661]: I0120 18:33:22.832843 4661 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5353890-587c-4ab1-b6a5-8e21f7c573b5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.148446 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" event={"ID":"f5353890-587c-4ab1-b6a5-8e21f7c573b5","Type":"ContainerDied","Data":"5afe416b4840876d23ffeb5bc70a10de5e51643a1452291c25bdcf722a3571da"} Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.148486 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5afe416b4840876d23ffeb5bc70a10de5e51643a1452291c25bdcf722a3571da" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.148497 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.236497 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl"] Jan 20 18:33:23 crc kubenswrapper[4661]: E0120 18:33:23.237013 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055a197f-ec7c-4188-a58a-7a87500bcd36" containerName="extract-utilities" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.237046 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="055a197f-ec7c-4188-a58a-7a87500bcd36" containerName="extract-utilities" Jan 20 18:33:23 crc kubenswrapper[4661]: E0120 18:33:23.237067 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5353890-587c-4ab1-b6a5-8e21f7c573b5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.237080 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5353890-587c-4ab1-b6a5-8e21f7c573b5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 20 18:33:23 crc kubenswrapper[4661]: E0120 18:33:23.237104 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055a197f-ec7c-4188-a58a-7a87500bcd36" containerName="registry-server" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.237115 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="055a197f-ec7c-4188-a58a-7a87500bcd36" containerName="registry-server" Jan 20 18:33:23 crc kubenswrapper[4661]: E0120 18:33:23.237132 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16789029-eec6-4884-9bd7-52e99134dc15" containerName="extract-utilities" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.237142 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="16789029-eec6-4884-9bd7-52e99134dc15" containerName="extract-utilities" Jan 20 18:33:23 crc kubenswrapper[4661]: E0120 18:33:23.237162 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16789029-eec6-4884-9bd7-52e99134dc15" containerName="registry-server" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.237175 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="16789029-eec6-4884-9bd7-52e99134dc15" containerName="registry-server" Jan 20 18:33:23 crc kubenswrapper[4661]: E0120 18:33:23.237200 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16789029-eec6-4884-9bd7-52e99134dc15" containerName="extract-content" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.237211 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="16789029-eec6-4884-9bd7-52e99134dc15" containerName="extract-content" Jan 20 18:33:23 crc kubenswrapper[4661]: E0120 18:33:23.237237 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055a197f-ec7c-4188-a58a-7a87500bcd36" containerName="extract-content" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.237248 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="055a197f-ec7c-4188-a58a-7a87500bcd36" containerName="extract-content" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.237505 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5353890-587c-4ab1-b6a5-8e21f7c573b5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.237533 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="055a197f-ec7c-4188-a58a-7a87500bcd36" 
containerName="registry-server" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.237557 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="16789029-eec6-4884-9bd7-52e99134dc15" containerName="registry-server" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.238473 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.240739 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.242222 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.245066 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.245279 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.265139 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl"] Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.341436 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtd65\" (UniqueName: \"kubernetes.io/projected/4931cafe-18cd-4020-9112-610654812598-kube-api-access-xtd65\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl\" (UID: \"4931cafe-18cd-4020-9112-610654812598\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.341495 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4931cafe-18cd-4020-9112-610654812598-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl\" (UID: \"4931cafe-18cd-4020-9112-610654812598\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.341647 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4931cafe-18cd-4020-9112-610654812598-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl\" (UID: \"4931cafe-18cd-4020-9112-610654812598\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.442838 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4931cafe-18cd-4020-9112-610654812598-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl\" (UID: \"4931cafe-18cd-4020-9112-610654812598\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.442969 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtd65\" (UniqueName: \"kubernetes.io/projected/4931cafe-18cd-4020-9112-610654812598-kube-api-access-xtd65\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl\" (UID: \"4931cafe-18cd-4020-9112-610654812598\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.442999 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4931cafe-18cd-4020-9112-610654812598-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl\" (UID: \"4931cafe-18cd-4020-9112-610654812598\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.447395 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4931cafe-18cd-4020-9112-610654812598-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl\" (UID: \"4931cafe-18cd-4020-9112-610654812598\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.448863 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4931cafe-18cd-4020-9112-610654812598-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl\" (UID: \"4931cafe-18cd-4020-9112-610654812598\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.473702 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtd65\" (UniqueName: \"kubernetes.io/projected/4931cafe-18cd-4020-9112-610654812598-kube-api-access-xtd65\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl\" (UID: \"4931cafe-18cd-4020-9112-610654812598\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" Jan 20 18:33:23 crc kubenswrapper[4661]: I0120 18:33:23.565344 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" Jan 20 18:33:24 crc kubenswrapper[4661]: I0120 18:33:24.113504 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl"] Jan 20 18:33:24 crc kubenswrapper[4661]: I0120 18:33:24.126240 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 18:33:24 crc kubenswrapper[4661]: I0120 18:33:24.176220 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" event={"ID":"4931cafe-18cd-4020-9112-610654812598","Type":"ContainerStarted","Data":"685aff150bcdaa8319e73eaec465395ad62fa75b32bcc593a2639e731110efe3"} Jan 20 18:33:25 crc kubenswrapper[4661]: I0120 18:33:25.186804 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" event={"ID":"4931cafe-18cd-4020-9112-610654812598","Type":"ContainerStarted","Data":"7337c5fd849d4afc21661eac4a3715efcde8f73ebfa8acba7df21a1df12e0ce2"} Jan 20 18:33:25 crc kubenswrapper[4661]: I0120 18:33:25.211405 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" podStartSLOduration=1.6821045190000001 podStartE2EDuration="2.211377321s" podCreationTimestamp="2026-01-20 18:33:23 +0000 UTC" firstStartedPulling="2026-01-20 18:33:24.125968256 +0000 UTC m=+1660.456757938" lastFinishedPulling="2026-01-20 18:33:24.655241068 +0000 UTC m=+1660.986030740" observedRunningTime="2026-01-20 18:33:25.201430119 +0000 UTC m=+1661.532219781" watchObservedRunningTime="2026-01-20 18:33:25.211377321 +0000 UTC m=+1661.542166993" Jan 20 18:33:31 crc kubenswrapper[4661]: I0120 18:33:31.142143 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:33:31 crc kubenswrapper[4661]: E0120 18:33:31.143027 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:33:42 crc kubenswrapper[4661]: I0120 18:33:42.142789 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:33:42 crc kubenswrapper[4661]: E0120 18:33:42.143553 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:33:55 crc kubenswrapper[4661]: I0120 18:33:55.142465 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:33:55 crc kubenswrapper[4661]: E0120 18:33:55.143198 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.073374 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7hvj8"] Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.083757 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-46xmv"] Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.093609 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-25c4-account-create-update-ztjbg"] Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.102609 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-skwf2"] Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.113410 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8948-account-create-update-s48vf"] Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.122915 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-46xmv"] Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.129756 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7af2-account-create-update-42cbf"] Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.136393 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8948-account-create-update-s48vf"] Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.153205 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6667ee-ad18-4f1c-9e5c-ff6574793de1" path="/var/lib/kubelet/pods/4b6667ee-ad18-4f1c-9e5c-ff6574793de1/volumes" Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.154003 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1035984-3393-4204-9840-6ede3ceef2e0" path="/var/lib/kubelet/pods/d1035984-3393-4204-9840-6ede3ceef2e0/volumes" Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.154635 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7hvj8"] Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.154771 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-25c4-account-create-update-ztjbg"] Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.160485 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7af2-account-create-update-42cbf"] Jan 20 18:34:02 crc kubenswrapper[4661]: I0120 18:34:02.167738 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-skwf2"] Jan 20 18:34:04 crc kubenswrapper[4661]: I0120 18:34:04.162278 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28dece6c-e91e-47ad-997e-0b0e6575a39c" path="/var/lib/kubelet/pods/28dece6c-e91e-47ad-997e-0b0e6575a39c/volumes" Jan 20 18:34:04 crc kubenswrapper[4661]: I0120 18:34:04.163590 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca3d32d-572f-43fc-87b1-0a1a25a49703" path="/var/lib/kubelet/pods/bca3d32d-572f-43fc-87b1-0a1a25a49703/volumes" Jan 20 18:34:04 crc kubenswrapper[4661]: I0120 18:34:04.164321 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece50d74-6552-42cf-b0df-72749a5e2edb" path="/var/lib/kubelet/pods/ece50d74-6552-42cf-b0df-72749a5e2edb/volumes" 
Jan 20 18:34:04 crc kubenswrapper[4661]: I0120 18:34:04.165263 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd6f1c00-4ad6-48c1-9f69-88479e6de2f7" path="/var/lib/kubelet/pods/fd6f1c00-4ad6-48c1-9f69-88479e6de2f7/volumes" Jan 20 18:34:08 crc kubenswrapper[4661]: I0120 18:34:08.142029 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:34:08 crc kubenswrapper[4661]: E0120 18:34:08.142596 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:34:10 crc kubenswrapper[4661]: I0120 18:34:10.040521 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dkb68"] Jan 20 18:34:10 crc kubenswrapper[4661]: I0120 18:34:10.055182 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dkb68"] Jan 20 18:34:10 crc kubenswrapper[4661]: I0120 18:34:10.153936 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a5c71b-b855-4712-b208-fc14557dd032" path="/var/lib/kubelet/pods/f1a5c71b-b855-4712-b208-fc14557dd032/volumes" Jan 20 18:34:15 crc kubenswrapper[4661]: I0120 18:34:15.487019 4661 scope.go:117] "RemoveContainer" containerID="53dbd93507567298a5fc017b2004c1628a8d96a582e78b258b6696fb4175eeb0" Jan 20 18:34:15 crc kubenswrapper[4661]: I0120 18:34:15.516296 4661 scope.go:117] "RemoveContainer" containerID="9e4c8a7d827f28c523e5281cf0854f34859108e98f972c95150d51db93dfe8e0" Jan 20 18:34:15 crc kubenswrapper[4661]: I0120 18:34:15.563686 4661 scope.go:117] "RemoveContainer" containerID="c3bd633c20d931ea4c152a53725bd9c1498d20d4b02ae9324b567a0c1df8c057" Jan 20 18:34:15 crc kubenswrapper[4661]: I0120 18:34:15.612936 4661 scope.go:117] "RemoveContainer" containerID="3fd3299ad3a125607c38b5170e61b32800735fb36df0c819cf020b3b58918cdd" Jan 20 18:34:15 crc kubenswrapper[4661]: I0120 18:34:15.650111 4661 scope.go:117] "RemoveContainer" containerID="0fcc75b4d92303fc8e95432c1e013aae3f7cbe712d1e1f207d350c66e0f597c2" Jan 20 18:34:15 crc kubenswrapper[4661]: I0120 18:34:15.687278 4661 scope.go:117] "RemoveContainer" containerID="1b98bc9b353adc6754aac8ed5d8c1c3647e1efd437d4a3a951202a4910cf9f1b" Jan 20 18:34:15 crc kubenswrapper[4661]: I0120 18:34:15.718158 4661 scope.go:117] "RemoveContainer" containerID="10b0355e1ceaec1926b283ca42258049e0ac4ab4f55bfdcdf7ff23c401443adf" Jan 20 18:34:22 crc kubenswrapper[4661]: I0120 18:34:22.143890 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:34:22 crc kubenswrapper[4661]: E0120 18:34:22.144449 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:34:23 crc kubenswrapper[4661]: I0120 18:34:23.024581 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-db-create-b6hh6"] Jan 20 18:34:23 crc kubenswrapper[4661]: I0120 18:34:23.031807 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-b6hh6"] Jan 20 18:34:24 crc kubenswrapper[4661]: I0120 18:34:24.153314 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2" path="/var/lib/kubelet/pods/f44143b3-87a7-4a6c-a5ca-fd2ddbb5a5b2/volumes" Jan 20 18:34:33 crc kubenswrapper[4661]: I0120 18:34:33.142640 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:34:33 crc kubenswrapper[4661]: E0120 18:34:33.143655 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:34:34 crc kubenswrapper[4661]: I0120 18:34:34.060155 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6b29-account-create-update-mp485"] Jan 20 18:34:34 crc kubenswrapper[4661]: I0120 18:34:34.075413 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c726-account-create-update-9nmqv"] Jan 20 18:34:34 crc kubenswrapper[4661]: I0120 18:34:34.088444 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4694-account-create-update-d9j2f"] Jan 20 18:34:34 crc kubenswrapper[4661]: I0120 18:34:34.098244 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6b29-account-create-update-mp485"] Jan 20 18:34:34 crc kubenswrapper[4661]: I0120 18:34:34.110571 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-g2bml"] Jan 20 18:34:34 crc kubenswrapper[4661]: I0120 18:34:34.123022 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-p9spf"] Jan 20 18:34:34 crc kubenswrapper[4661]: I0120 18:34:34.131684 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4694-account-create-update-d9j2f"] Jan 20 18:34:34 crc kubenswrapper[4661]: I0120 18:34:34.151759 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ca68c3b-f314-4a85-80c3-e3ff0a17b449" path="/var/lib/kubelet/pods/8ca68c3b-f314-4a85-80c3-e3ff0a17b449/volumes" Jan 20 18:34:34 crc kubenswrapper[4661]: I0120 18:34:34.152296 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba6f71cd-96ff-473f-b875-b8f44a58dab4" path="/var/lib/kubelet/pods/ba6f71cd-96ff-473f-b875-b8f44a58dab4/volumes" Jan 20 18:34:34 crc kubenswrapper[4661]: I0120 18:34:34.153010 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c726-account-create-update-9nmqv"] Jan 20 18:34:34 crc kubenswrapper[4661]: I0120 18:34:34.153041 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-g2bml"] Jan 20 18:34:34 crc kubenswrapper[4661]: I0120 18:34:34.161123 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-p9spf"] Jan 20 18:34:36 crc kubenswrapper[4661]: I0120 18:34:36.152783 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ecbb4c9-acd4-45bd-a270-8798d5aa5926" 
path="/var/lib/kubelet/pods/8ecbb4c9-acd4-45bd-a270-8798d5aa5926/volumes" Jan 20 18:34:36 crc kubenswrapper[4661]: I0120 18:34:36.153788 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="becce439-c271-41e5-9d39-0bdd4b284772" path="/var/lib/kubelet/pods/becce439-c271-41e5-9d39-0bdd4b284772/volumes" Jan 20 18:34:36 crc kubenswrapper[4661]: I0120 18:34:36.154371 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d42815f3-34a4-40b4-bfca-cc453cb7d569" path="/var/lib/kubelet/pods/d42815f3-34a4-40b4-bfca-cc453cb7d569/volumes" Jan 20 18:34:41 crc kubenswrapper[4661]: I0120 18:34:41.037602 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-tck9n"] Jan 20 18:34:41 crc kubenswrapper[4661]: I0120 18:34:41.049584 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-tck9n"] Jan 20 18:34:42 crc kubenswrapper[4661]: I0120 18:34:42.155302 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64020290-0e73-480d-b523-d7cb664eacfd" path="/var/lib/kubelet/pods/64020290-0e73-480d-b523-d7cb664eacfd/volumes" Jan 20 18:34:42 crc kubenswrapper[4661]: I0120 18:34:42.913099 4661 generic.go:334] "Generic (PLEG): container finished" podID="4931cafe-18cd-4020-9112-610654812598" containerID="7337c5fd849d4afc21661eac4a3715efcde8f73ebfa8acba7df21a1df12e0ce2" exitCode=0 Jan 20 18:34:42 crc kubenswrapper[4661]: I0120 18:34:42.913148 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" event={"ID":"4931cafe-18cd-4020-9112-610654812598","Type":"ContainerDied","Data":"7337c5fd849d4afc21661eac4a3715efcde8f73ebfa8acba7df21a1df12e0ce2"} Jan 20 18:34:44 crc kubenswrapper[4661]: I0120 18:34:44.356636 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" Jan 20 18:34:44 crc kubenswrapper[4661]: I0120 18:34:44.487432 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4931cafe-18cd-4020-9112-610654812598-ssh-key-openstack-edpm-ipam\") pod \"4931cafe-18cd-4020-9112-610654812598\" (UID: \"4931cafe-18cd-4020-9112-610654812598\") " Jan 20 18:34:44 crc kubenswrapper[4661]: I0120 18:34:44.487525 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtd65\" (UniqueName: \"kubernetes.io/projected/4931cafe-18cd-4020-9112-610654812598-kube-api-access-xtd65\") pod \"4931cafe-18cd-4020-9112-610654812598\" (UID: \"4931cafe-18cd-4020-9112-610654812598\") " Jan 20 18:34:44 crc kubenswrapper[4661]: I0120 18:34:44.487572 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4931cafe-18cd-4020-9112-610654812598-inventory\") pod \"4931cafe-18cd-4020-9112-610654812598\" (UID: \"4931cafe-18cd-4020-9112-610654812598\") " Jan 20 18:34:44 crc kubenswrapper[4661]: I0120 18:34:44.493225 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4931cafe-18cd-4020-9112-610654812598-kube-api-access-xtd65" (OuterVolumeSpecName: "kube-api-access-xtd65") pod "4931cafe-18cd-4020-9112-610654812598" (UID: "4931cafe-18cd-4020-9112-610654812598"). InnerVolumeSpecName "kube-api-access-xtd65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:34:44 crc kubenswrapper[4661]: I0120 18:34:44.510918 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4931cafe-18cd-4020-9112-610654812598-inventory" (OuterVolumeSpecName: "inventory") pod "4931cafe-18cd-4020-9112-610654812598" (UID: "4931cafe-18cd-4020-9112-610654812598"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:34:44 crc kubenswrapper[4661]: I0120 18:34:44.521354 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4931cafe-18cd-4020-9112-610654812598-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4931cafe-18cd-4020-9112-610654812598" (UID: "4931cafe-18cd-4020-9112-610654812598"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:34:44 crc kubenswrapper[4661]: I0120 18:34:44.591065 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4931cafe-18cd-4020-9112-610654812598-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:34:44 crc kubenswrapper[4661]: I0120 18:34:44.591206 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtd65\" (UniqueName: \"kubernetes.io/projected/4931cafe-18cd-4020-9112-610654812598-kube-api-access-xtd65\") on node \"crc\" DevicePath \"\"" Jan 20 18:34:44 crc kubenswrapper[4661]: I0120 18:34:44.591225 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4931cafe-18cd-4020-9112-610654812598-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:34:44 crc kubenswrapper[4661]: I0120 18:34:44.945421 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" event={"ID":"4931cafe-18cd-4020-9112-610654812598","Type":"ContainerDied","Data":"685aff150bcdaa8319e73eaec465395ad62fa75b32bcc593a2639e731110efe3"} Jan 20 18:34:44 crc kubenswrapper[4661]: I0120 18:34:44.945463 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="685aff150bcdaa8319e73eaec465395ad62fa75b32bcc593a2639e731110efe3" Jan 20 18:34:44 crc kubenswrapper[4661]: I0120 18:34:44.945582 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.052314 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92"] Jan 20 18:34:45 crc kubenswrapper[4661]: E0120 18:34:45.052867 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4931cafe-18cd-4020-9112-610654812598" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.052936 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="4931cafe-18cd-4020-9112-610654812598" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.053165 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="4931cafe-18cd-4020-9112-610654812598" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.053797 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.056086 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.056256 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.056557 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.063415 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.122187 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92"] Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.142281 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:34:45 crc kubenswrapper[4661]: E0120 18:34:45.142588 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.200952 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4q92\" (UID: \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.201089 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4q92\" (UID: \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.201182 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lngkf\" (UniqueName: \"kubernetes.io/projected/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-kube-api-access-lngkf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4q92\" (UID: \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.302410 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lngkf\" (UniqueName: \"kubernetes.io/projected/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-kube-api-access-lngkf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4q92\" (UID: \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.302523 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4q92\" (UID: \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.302573 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4q92\" (UID: \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.307655 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4q92\" (UID: \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.318454 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4q92\" (UID: \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.339083 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lngkf\" (UniqueName: \"kubernetes.io/projected/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-kube-api-access-lngkf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-s4q92\" (UID: \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" Jan 20 18:34:45 crc kubenswrapper[4661]: I0120 18:34:45.374096 4661 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" Jan 20 18:34:46 crc kubenswrapper[4661]: I0120 18:34:46.493132 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92"] Jan 20 18:34:46 crc kubenswrapper[4661]: I0120 18:34:46.961166 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" event={"ID":"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1","Type":"ContainerStarted","Data":"ff91f829c3465170b84d0add5268202110fccf1c228c9a7678f668ae2239b588"} Jan 20 18:34:47 crc kubenswrapper[4661]: I0120 18:34:47.970864 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" event={"ID":"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1","Type":"ContainerStarted","Data":"6c5731613a6a89276146dee2b213cfb655cc82037b6f9976ba27a14101f5885c"} Jan 20 18:34:48 crc kubenswrapper[4661]: I0120 18:34:48.033847 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" podStartSLOduration=2.553991804 podStartE2EDuration="3.033821045s" podCreationTimestamp="2026-01-20 18:34:45 +0000 UTC" firstStartedPulling="2026-01-20 18:34:46.499330551 +0000 UTC m=+1742.830120213" lastFinishedPulling="2026-01-20 18:34:46.979159792 +0000 UTC m=+1743.309949454" observedRunningTime="2026-01-20 18:34:47.987783025 +0000 UTC m=+1744.318572687" watchObservedRunningTime="2026-01-20 18:34:48.033821045 +0000 UTC m=+1744.364610707" Jan 20 18:34:50 crc kubenswrapper[4661]: I0120 18:34:50.033367 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wq7m7"] Jan 20 18:34:50 crc kubenswrapper[4661]: I0120 18:34:50.044855 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-wq7m7"] Jan 20 18:34:50 crc kubenswrapper[4661]: I0120 18:34:50.157749 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="289c1012-9041-4cb4-baa7-888d31048e4c" path="/var/lib/kubelet/pods/289c1012-9041-4cb4-baa7-888d31048e4c/volumes" Jan 20 18:34:53 crc kubenswrapper[4661]: I0120 18:34:53.009150 4661 generic.go:334] "Generic (PLEG): container finished" podID="04ea87ee-8ceb-4be6-b968-1ab597f5c7b1" containerID="6c5731613a6a89276146dee2b213cfb655cc82037b6f9976ba27a14101f5885c" exitCode=0 Jan 20 18:34:53 crc kubenswrapper[4661]: I0120 18:34:53.009244 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" event={"ID":"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1","Type":"ContainerDied","Data":"6c5731613a6a89276146dee2b213cfb655cc82037b6f9976ba27a14101f5885c"} Jan 20 18:34:54 crc kubenswrapper[4661]: I0120 18:34:54.456921 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" Jan 20 18:34:54 crc kubenswrapper[4661]: I0120 18:34:54.578103 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-inventory\") pod \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\" (UID: \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\") " Jan 20 18:34:54 crc kubenswrapper[4661]: I0120 18:34:54.578184 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lngkf\" (UniqueName: \"kubernetes.io/projected/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-kube-api-access-lngkf\") pod \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\" (UID: \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\") " Jan 20 18:34:54 crc kubenswrapper[4661]: I0120 18:34:54.578274 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-ssh-key-openstack-edpm-ipam\") pod \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\" (UID: \"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1\") " Jan 20 18:34:54 crc kubenswrapper[4661]: I0120 18:34:54.584072 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-kube-api-access-lngkf" (OuterVolumeSpecName: "kube-api-access-lngkf") pod "04ea87ee-8ceb-4be6-b968-1ab597f5c7b1" (UID: "04ea87ee-8ceb-4be6-b968-1ab597f5c7b1"). InnerVolumeSpecName "kube-api-access-lngkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:34:54 crc kubenswrapper[4661]: I0120 18:34:54.606028 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "04ea87ee-8ceb-4be6-b968-1ab597f5c7b1" (UID: "04ea87ee-8ceb-4be6-b968-1ab597f5c7b1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:34:54 crc kubenswrapper[4661]: I0120 18:34:54.611180 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-inventory" (OuterVolumeSpecName: "inventory") pod "04ea87ee-8ceb-4be6-b968-1ab597f5c7b1" (UID: "04ea87ee-8ceb-4be6-b968-1ab597f5c7b1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:34:54 crc kubenswrapper[4661]: I0120 18:34:54.680134 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:34:54 crc kubenswrapper[4661]: I0120 18:34:54.680173 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lngkf\" (UniqueName: \"kubernetes.io/projected/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-kube-api-access-lngkf\") on node \"crc\" DevicePath \"\"" Jan 20 18:34:54 crc kubenswrapper[4661]: I0120 18:34:54.680189 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.027736 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" event={"ID":"04ea87ee-8ceb-4be6-b968-1ab597f5c7b1","Type":"ContainerDied","Data":"ff91f829c3465170b84d0add5268202110fccf1c228c9a7678f668ae2239b588"} Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.027933 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff91f829c3465170b84d0add5268202110fccf1c228c9a7678f668ae2239b588" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.027783 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.101405 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh"] Jan 20 18:34:55 crc kubenswrapper[4661]: E0120 18:34:55.101849 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ea87ee-8ceb-4be6-b968-1ab597f5c7b1" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.101871 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ea87ee-8ceb-4be6-b968-1ab597f5c7b1" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.102033 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ea87ee-8ceb-4be6-b968-1ab597f5c7b1" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.102642 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.109312 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.109339 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.109883 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.109962 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.113059 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh"] Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.186612 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7d2mh\" (UID: \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.187055 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7d2mh\" (UID: \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.187147 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmcl8\" (UniqueName: \"kubernetes.io/projected/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-kube-api-access-dmcl8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7d2mh\" (UID: \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.288935 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7d2mh\" (UID: \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.289123 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7d2mh\" (UID: \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.289174 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmcl8\" (UniqueName: \"kubernetes.io/projected/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-kube-api-access-dmcl8\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-7d2mh\" (UID: \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.293521 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7d2mh\" (UID: \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.304820 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7d2mh\" (UID: \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.313212 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmcl8\" (UniqueName: \"kubernetes.io/projected/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-kube-api-access-dmcl8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7d2mh\" (UID: \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.424277 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" Jan 20 18:34:55 crc kubenswrapper[4661]: I0120 18:34:55.912769 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh"] Jan 20 18:34:56 crc kubenswrapper[4661]: I0120 18:34:56.038106 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" event={"ID":"d6cfabfa-20dc-43e7-895a-cfeabfefcde1","Type":"ContainerStarted","Data":"1d7b10331d182e2e74d57e94769df30c26dabc34cd00d97289c985c4c0247918"} Jan 20 18:34:56 crc kubenswrapper[4661]: I0120 18:34:56.144745 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:34:56 crc kubenswrapper[4661]: E0120 18:34:56.145295 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:34:58 crc kubenswrapper[4661]: I0120 18:34:58.062198 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" event={"ID":"d6cfabfa-20dc-43e7-895a-cfeabfefcde1","Type":"ContainerStarted","Data":"09d2b4b0165214612382aee45c81e5b92cbceee390be329108e78346bd4bc33d"} Jan 20 18:34:58 crc kubenswrapper[4661]: I0120 18:34:58.092501 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" podStartSLOduration=2.589365248 podStartE2EDuration="3.092477743s" podCreationTimestamp="2026-01-20 18:34:55 +0000 
UTC" firstStartedPulling="2026-01-20 18:34:55.917197597 +0000 UTC m=+1752.247987259" lastFinishedPulling="2026-01-20 18:34:56.420310082 +0000 UTC m=+1752.751099754" observedRunningTime="2026-01-20 18:34:58.08772895 +0000 UTC m=+1754.418518602" watchObservedRunningTime="2026-01-20 18:34:58.092477743 +0000 UTC m=+1754.423267415" Jan 20 18:35:08 crc kubenswrapper[4661]: I0120 18:35:08.146361 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:35:08 crc kubenswrapper[4661]: E0120 18:35:08.147780 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:35:15 crc kubenswrapper[4661]: I0120 18:35:15.856033 4661 scope.go:117] "RemoveContainer" containerID="25897a81bce10762f0db6c8058da7297d3ed4363b9e3345f1fd34da2d3873903" Jan 20 18:35:15 crc kubenswrapper[4661]: I0120 18:35:15.886040 4661 scope.go:117] "RemoveContainer" containerID="4e961f31c710c32d632178da3c1f6553e83a76abff69ce2153f682fd7c352402" Jan 20 18:35:15 crc kubenswrapper[4661]: I0120 18:35:15.925078 4661 scope.go:117] "RemoveContainer" containerID="3004c39b2a690d14c58709df3cdafe712bdb579576ed3687154e82d37a2c8591" Jan 20 18:35:15 crc kubenswrapper[4661]: I0120 18:35:15.975031 4661 scope.go:117] "RemoveContainer" containerID="77925133481db4cab9d22b8313e107aa9311cb661804b6c28eea5f7f4979af4b" Jan 20 18:35:16 crc kubenswrapper[4661]: I0120 18:35:16.036031 4661 scope.go:117] "RemoveContainer" containerID="91c1c0dc5ee3b06a8c25355770483a60714c2caa96d9416802ca6b14f02eda5f" Jan 20 18:35:16 crc kubenswrapper[4661]: I0120 18:35:16.059233 4661 scope.go:117] "RemoveContainer" containerID="9c44cb67da46db745fac0fed83b94683bb0aff4510142ef54d241b5dfbcff29a" Jan 20 18:35:16 crc kubenswrapper[4661]: I0120 18:35:16.104808 4661 scope.go:117] "RemoveContainer" containerID="4e14a4f29c28dcd080fd708fc1a1397641711d809052cb863dca7f5e9ba4ca54" Jan 20 18:35:16 crc kubenswrapper[4661]: I0120 18:35:16.125403 4661 scope.go:117] "RemoveContainer" containerID="b357a0ec01664d19d9701e2fe9631e4f8db945a5ea59a45bc4c522d3be569270" Jan 20 18:35:19 crc kubenswrapper[4661]: I0120 18:35:19.142492 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:35:19 crc kubenswrapper[4661]: E0120 18:35:19.143093 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:35:20 crc kubenswrapper[4661]: I0120 18:35:20.058737 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6mvml"] Jan 20 18:35:20 crc kubenswrapper[4661]: I0120 18:35:20.071840 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8t5pf"] Jan 20 18:35:20 crc kubenswrapper[4661]: I0120 18:35:20.084399 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-db-sync-bbdtt"] Jan 20 18:35:20 crc kubenswrapper[4661]: I0120 18:35:20.092175 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-68lzj"] Jan 20 18:35:20 crc kubenswrapper[4661]: I0120 18:35:20.099357 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8t5pf"] Jan 20 18:35:20 crc kubenswrapper[4661]: I0120 18:35:20.108040 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6mvml"] Jan 20 18:35:20 crc kubenswrapper[4661]: I0120 18:35:20.117595 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-68lzj"] Jan 20 18:35:20 crc kubenswrapper[4661]: I0120 18:35:20.125775 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-bbdtt"] Jan 20 18:35:20 crc kubenswrapper[4661]: I0120 18:35:20.154721 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5443a645-bf2b-48db-8111-efa81b46526c" path="/var/lib/kubelet/pods/5443a645-bf2b-48db-8111-efa81b46526c/volumes" Jan 20 18:35:20 crc kubenswrapper[4661]: I0120 18:35:20.155409 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b097fc1c-9aca-44d7-be7e-cd35bf67f9f5" path="/var/lib/kubelet/pods/b097fc1c-9aca-44d7-be7e-cd35bf67f9f5/volumes" Jan 20 18:35:20 crc kubenswrapper[4661]: I0120 18:35:20.156136 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbaa46dc-18ed-41e6-84b6-86daf834ffd4" path="/var/lib/kubelet/pods/dbaa46dc-18ed-41e6-84b6-86daf834ffd4/volumes" Jan 20 18:35:20 crc kubenswrapper[4661]: I0120 18:35:20.164781 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de" path="/var/lib/kubelet/pods/f4ab2fe8-a602-4b1c-b880-f1eeb9bde9de/volumes" Jan 20 18:35:32 crc kubenswrapper[4661]: I0120 18:35:32.143233 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:35:32 crc kubenswrapper[4661]: E0120 18:35:32.144098 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:35:45 crc kubenswrapper[4661]: I0120 18:35:45.050013 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-glwqq"] Jan 20 18:35:45 crc kubenswrapper[4661]: I0120 18:35:45.091183 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-glwqq"] Jan 20 18:35:45 crc kubenswrapper[4661]: I0120 18:35:45.142489 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:35:45 crc kubenswrapper[4661]: E0120 18:35:45.142901 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:35:46 crc kubenswrapper[4661]: I0120 18:35:46.156603 
4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2423d758-4514-439d-a804-42287945bedc" path="/var/lib/kubelet/pods/2423d758-4514-439d-a804-42287945bedc/volumes" Jan 20 18:35:46 crc kubenswrapper[4661]: I0120 18:35:46.487929 4661 generic.go:334] "Generic (PLEG): container finished" podID="d6cfabfa-20dc-43e7-895a-cfeabfefcde1" containerID="09d2b4b0165214612382aee45c81e5b92cbceee390be329108e78346bd4bc33d" exitCode=0 Jan 20 18:35:46 crc kubenswrapper[4661]: I0120 18:35:46.487991 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" event={"ID":"d6cfabfa-20dc-43e7-895a-cfeabfefcde1","Type":"ContainerDied","Data":"09d2b4b0165214612382aee45c81e5b92cbceee390be329108e78346bd4bc33d"} Jan 20 18:35:47 crc kubenswrapper[4661]: I0120 18:35:47.896179 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.083870 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-ssh-key-openstack-edpm-ipam\") pod \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\" (UID: \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\") " Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.084473 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmcl8\" (UniqueName: \"kubernetes.io/projected/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-kube-api-access-dmcl8\") pod \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\" (UID: \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\") " Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.084744 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-inventory\") pod \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\" (UID: \"d6cfabfa-20dc-43e7-895a-cfeabfefcde1\") " Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.096308 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-kube-api-access-dmcl8" (OuterVolumeSpecName: "kube-api-access-dmcl8") pod "d6cfabfa-20dc-43e7-895a-cfeabfefcde1" (UID: "d6cfabfa-20dc-43e7-895a-cfeabfefcde1"). InnerVolumeSpecName "kube-api-access-dmcl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.121832 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-inventory" (OuterVolumeSpecName: "inventory") pod "d6cfabfa-20dc-43e7-895a-cfeabfefcde1" (UID: "d6cfabfa-20dc-43e7-895a-cfeabfefcde1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.121881 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d6cfabfa-20dc-43e7-895a-cfeabfefcde1" (UID: "d6cfabfa-20dc-43e7-895a-cfeabfefcde1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.186178 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.186219 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmcl8\" (UniqueName: \"kubernetes.io/projected/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-kube-api-access-dmcl8\") on node \"crc\" DevicePath \"\"" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.186233 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6cfabfa-20dc-43e7-895a-cfeabfefcde1-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.506585 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" event={"ID":"d6cfabfa-20dc-43e7-895a-cfeabfefcde1","Type":"ContainerDied","Data":"1d7b10331d182e2e74d57e94769df30c26dabc34cd00d97289c985c4c0247918"} Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.506640 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d7b10331d182e2e74d57e94769df30c26dabc34cd00d97289c985c4c0247918" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.507101 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.624314 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh"] Jan 20 18:35:48 crc kubenswrapper[4661]: E0120 18:35:48.624851 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cfabfa-20dc-43e7-895a-cfeabfefcde1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.624885 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cfabfa-20dc-43e7-895a-cfeabfefcde1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.625105 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cfabfa-20dc-43e7-895a-cfeabfefcde1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.625843 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.632162 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.632398 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.632748 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.634740 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.646499 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh"] Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.834886 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlq6c\" (UniqueName: \"kubernetes.io/projected/3da242a6-ba42-4b70-9745-e06ea2a4146e-kube-api-access-dlq6c\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh\" (UID: \"3da242a6-ba42-4b70-9745-e06ea2a4146e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.834982 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3da242a6-ba42-4b70-9745-e06ea2a4146e-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh\" (UID: \"3da242a6-ba42-4b70-9745-e06ea2a4146e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.835066 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3da242a6-ba42-4b70-9745-e06ea2a4146e-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh\" (UID: \"3da242a6-ba42-4b70-9745-e06ea2a4146e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.936555 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlq6c\" (UniqueName: \"kubernetes.io/projected/3da242a6-ba42-4b70-9745-e06ea2a4146e-kube-api-access-dlq6c\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh\" (UID: \"3da242a6-ba42-4b70-9745-e06ea2a4146e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.937382 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3da242a6-ba42-4b70-9745-e06ea2a4146e-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh\" (UID: \"3da242a6-ba42-4b70-9745-e06ea2a4146e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.938065 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3da242a6-ba42-4b70-9745-e06ea2a4146e-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh\" (UID: \"3da242a6-ba42-4b70-9745-e06ea2a4146e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.943802 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3da242a6-ba42-4b70-9745-e06ea2a4146e-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh\" (UID: \"3da242a6-ba42-4b70-9745-e06ea2a4146e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.946315 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3da242a6-ba42-4b70-9745-e06ea2a4146e-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh\" (UID: \"3da242a6-ba42-4b70-9745-e06ea2a4146e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.958082 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlq6c\" (UniqueName: \"kubernetes.io/projected/3da242a6-ba42-4b70-9745-e06ea2a4146e-kube-api-access-dlq6c\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh\" (UID: \"3da242a6-ba42-4b70-9745-e06ea2a4146e\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" Jan 20 18:35:48 crc kubenswrapper[4661]: I0120 18:35:48.987897 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" Jan 20 18:35:49 crc kubenswrapper[4661]: I0120 18:35:49.518601 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh"] Jan 20 18:35:50 crc kubenswrapper[4661]: I0120 18:35:50.534736 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" event={"ID":"3da242a6-ba42-4b70-9745-e06ea2a4146e","Type":"ContainerStarted","Data":"dfd542129740de45defae7d074253489ae918f116b9e5462695e4151363cb82d"} Jan 20 18:35:50 crc kubenswrapper[4661]: I0120 18:35:50.535014 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" event={"ID":"3da242a6-ba42-4b70-9745-e06ea2a4146e","Type":"ContainerStarted","Data":"53ea85c07384a2db7c38b02bac4e8d8785bdaf8e850fc5936e54feb3f114c98e"} Jan 20 18:35:50 crc kubenswrapper[4661]: I0120 18:35:50.558210 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" podStartSLOduration=1.938980473 podStartE2EDuration="2.558185546s" podCreationTimestamp="2026-01-20 18:35:48 +0000 UTC" firstStartedPulling="2026-01-20 18:35:49.526883719 +0000 UTC m=+1805.857673381" lastFinishedPulling="2026-01-20 18:35:50.146088792 +0000 UTC m=+1806.476878454" observedRunningTime="2026-01-20 18:35:50.550076956 +0000 UTC m=+1806.880866628" watchObservedRunningTime="2026-01-20 18:35:50.558185546 +0000 UTC m=+1806.888975218" Jan 20 18:35:55 crc kubenswrapper[4661]: I0120 18:35:55.574618 4661 generic.go:334] "Generic (PLEG): container finished" podID="3da242a6-ba42-4b70-9745-e06ea2a4146e" 
containerID="dfd542129740de45defae7d074253489ae918f116b9e5462695e4151363cb82d" exitCode=0 Jan 20 18:35:55 crc kubenswrapper[4661]: I0120 18:35:55.574694 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" event={"ID":"3da242a6-ba42-4b70-9745-e06ea2a4146e","Type":"ContainerDied","Data":"dfd542129740de45defae7d074253489ae918f116b9e5462695e4151363cb82d"} Jan 20 18:35:56 crc kubenswrapper[4661]: I0120 18:35:56.990694 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.081296 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3da242a6-ba42-4b70-9745-e06ea2a4146e-inventory\") pod \"3da242a6-ba42-4b70-9745-e06ea2a4146e\" (UID: \"3da242a6-ba42-4b70-9745-e06ea2a4146e\") " Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.081413 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3da242a6-ba42-4b70-9745-e06ea2a4146e-ssh-key-openstack-edpm-ipam\") pod \"3da242a6-ba42-4b70-9745-e06ea2a4146e\" (UID: \"3da242a6-ba42-4b70-9745-e06ea2a4146e\") " Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.081449 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlq6c\" (UniqueName: \"kubernetes.io/projected/3da242a6-ba42-4b70-9745-e06ea2a4146e-kube-api-access-dlq6c\") pod \"3da242a6-ba42-4b70-9745-e06ea2a4146e\" (UID: \"3da242a6-ba42-4b70-9745-e06ea2a4146e\") " Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.101080 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da242a6-ba42-4b70-9745-e06ea2a4146e-kube-api-access-dlq6c" (OuterVolumeSpecName: "kube-api-access-dlq6c") pod "3da242a6-ba42-4b70-9745-e06ea2a4146e" (UID: "3da242a6-ba42-4b70-9745-e06ea2a4146e"). InnerVolumeSpecName "kube-api-access-dlq6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.117517 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da242a6-ba42-4b70-9745-e06ea2a4146e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3da242a6-ba42-4b70-9745-e06ea2a4146e" (UID: "3da242a6-ba42-4b70-9745-e06ea2a4146e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.139710 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da242a6-ba42-4b70-9745-e06ea2a4146e-inventory" (OuterVolumeSpecName: "inventory") pod "3da242a6-ba42-4b70-9745-e06ea2a4146e" (UID: "3da242a6-ba42-4b70-9745-e06ea2a4146e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.182575 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlq6c\" (UniqueName: \"kubernetes.io/projected/3da242a6-ba42-4b70-9745-e06ea2a4146e-kube-api-access-dlq6c\") on node \"crc\" DevicePath \"\"" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.182607 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3da242a6-ba42-4b70-9745-e06ea2a4146e-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.182617 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3da242a6-ba42-4b70-9745-e06ea2a4146e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.599018 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" event={"ID":"3da242a6-ba42-4b70-9745-e06ea2a4146e","Type":"ContainerDied","Data":"53ea85c07384a2db7c38b02bac4e8d8785bdaf8e850fc5936e54feb3f114c98e"} Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.599381 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ea85c07384a2db7c38b02bac4e8d8785bdaf8e850fc5936e54feb3f114c98e" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.599101 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.683141 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm"] Jan 20 18:35:57 crc kubenswrapper[4661]: E0120 18:35:57.683932 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da242a6-ba42-4b70-9745-e06ea2a4146e" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.683963 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da242a6-ba42-4b70-9745-e06ea2a4146e" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.684261 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da242a6-ba42-4b70-9745-e06ea2a4146e" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.685219 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.687732 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.688150 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.688979 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.691351 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66e42258-c528-4826-b990-06290a7e595d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm\" (UID: \"66e42258-c528-4826-b990-06290a7e595d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.691459 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66e42258-c528-4826-b990-06290a7e595d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm\" (UID: \"66e42258-c528-4826-b990-06290a7e595d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.691661 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lflsf\" (UniqueName: \"kubernetes.io/projected/66e42258-c528-4826-b990-06290a7e595d-kube-api-access-lflsf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm\" (UID: \"66e42258-c528-4826-b990-06290a7e595d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.692386 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.700291 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm"] Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.792611 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lflsf\" (UniqueName: \"kubernetes.io/projected/66e42258-c528-4826-b990-06290a7e595d-kube-api-access-lflsf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm\" (UID: \"66e42258-c528-4826-b990-06290a7e595d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.792984 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66e42258-c528-4826-b990-06290a7e595d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm\" (UID: \"66e42258-c528-4826-b990-06290a7e595d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.793906 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/66e42258-c528-4826-b990-06290a7e595d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm\" (UID: \"66e42258-c528-4826-b990-06290a7e595d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.797012 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66e42258-c528-4826-b990-06290a7e595d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm\" (UID: \"66e42258-c528-4826-b990-06290a7e595d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.797035 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66e42258-c528-4826-b990-06290a7e595d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm\" (UID: \"66e42258-c528-4826-b990-06290a7e595d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" Jan 20 18:35:57 crc kubenswrapper[4661]: I0120 18:35:57.809135 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lflsf\" (UniqueName: \"kubernetes.io/projected/66e42258-c528-4826-b990-06290a7e595d-kube-api-access-lflsf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm\" (UID: \"66e42258-c528-4826-b990-06290a7e595d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" Jan 20 18:35:58 crc kubenswrapper[4661]: I0120 18:35:58.004515 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" Jan 20 18:35:58 crc kubenswrapper[4661]: I0120 18:35:58.141788 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:35:58 crc kubenswrapper[4661]: E0120 18:35:58.142276 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:35:58 crc kubenswrapper[4661]: I0120 18:35:58.557340 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm"] Jan 20 18:35:58 crc kubenswrapper[4661]: I0120 18:35:58.609898 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" event={"ID":"66e42258-c528-4826-b990-06290a7e595d","Type":"ContainerStarted","Data":"0bb9739f7c2f8ed128ab84e02c493e77cadc759d40f0afb4d998f551b395b8b1"} Jan 20 18:35:59 crc kubenswrapper[4661]: I0120 18:35:59.617544 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" event={"ID":"66e42258-c528-4826-b990-06290a7e595d","Type":"ContainerStarted","Data":"f9e5e1c9ec7c2f5244b61c235e49d2fd02ac958a806cd3e29467783a51b62a32"} Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.073451 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" 
podStartSLOduration=15.416615616 podStartE2EDuration="16.073428419s" podCreationTimestamp="2026-01-20 18:35:57 +0000 UTC" firstStartedPulling="2026-01-20 18:35:58.562356874 +0000 UTC m=+1814.893146536" lastFinishedPulling="2026-01-20 18:35:59.219169677 +0000 UTC m=+1815.549959339" observedRunningTime="2026-01-20 18:35:59.649433161 +0000 UTC m=+1815.980222813" watchObservedRunningTime="2026-01-20 18:36:13.073428419 +0000 UTC m=+1829.404218081" Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.085026 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-750e-account-create-update-lq645"] Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.101202 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-cpd2l"] Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.109722 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vlvmp"] Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.116451 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-pkjqx"] Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.123434 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8e5c-account-create-update-4p6rp"] Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.131071 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vlvmp"] Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.137594 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-f854-account-create-update-9gfsv"] Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.142391 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:36:13 crc kubenswrapper[4661]: E0120 18:36:13.142765 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.146325 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-750e-account-create-update-lq645"] Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.155507 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-cpd2l"] Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.164724 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-pkjqx"] Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.174156 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-f854-account-create-update-9gfsv"] Jan 20 18:36:13 crc kubenswrapper[4661]: I0120 18:36:13.181436 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8e5c-account-create-update-4p6rp"] Jan 20 18:36:14 crc kubenswrapper[4661]: I0120 18:36:14.153470 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07abbdc8-94ea-4e7f-9d9f-92eef06d26c5" path="/var/lib/kubelet/pods/07abbdc8-94ea-4e7f-9d9f-92eef06d26c5/volumes" Jan 20 18:36:14 crc kubenswrapper[4661]: I0120 18:36:14.154457 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1879889e-4ae9-4bbf-b25d-efc0020c3000" path="/var/lib/kubelet/pods/1879889e-4ae9-4bbf-b25d-efc0020c3000/volumes" Jan 20 18:36:14 crc kubenswrapper[4661]: I0120 18:36:14.155148 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49840a38-6542-47bf-ab78-ba21cd4fdd94" path="/var/lib/kubelet/pods/49840a38-6542-47bf-ab78-ba21cd4fdd94/volumes" Jan 20 18:36:14 crc kubenswrapper[4661]: I0120 18:36:14.155838 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a47f9dfb-b359-4486-bbb8-7895eddd6176" path="/var/lib/kubelet/pods/a47f9dfb-b359-4486-bbb8-7895eddd6176/volumes" Jan 20 18:36:14 crc kubenswrapper[4661]: I0120 18:36:14.157517 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b20fc329-8560-46c2-ad9e-235a140302e2" path="/var/lib/kubelet/pods/b20fc329-8560-46c2-ad9e-235a140302e2/volumes" Jan 20 18:36:14 crc kubenswrapper[4661]: I0120 18:36:14.158226 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e49fe865-f025-4842-8e32-a4f9213cdc2a" path="/var/lib/kubelet/pods/e49fe865-f025-4842-8e32-a4f9213cdc2a/volumes" Jan 20 18:36:16 crc kubenswrapper[4661]: I0120 18:36:16.278775 4661 scope.go:117] "RemoveContainer" containerID="d78a870eb313e545ddd0b420c52038a328c22cbaaaa42e5563a12a2abedfffc1" Jan 20 18:36:16 crc kubenswrapper[4661]: I0120 18:36:16.309313 4661 scope.go:117] "RemoveContainer" containerID="dea524a1d31897fc46521509b37fdba03041d35156177fabefea81763cbb3879" Jan 20 18:36:16 crc kubenswrapper[4661]: I0120 18:36:16.374009 4661 scope.go:117] "RemoveContainer" containerID="3117defb758cb93e92036fcd7872361417ce7c3f258b7e847604642f3aaed493" Jan 20 18:36:16 crc kubenswrapper[4661]: I0120 18:36:16.407412 4661 scope.go:117] "RemoveContainer" containerID="cbc718a107e4e6824972da0ab909768ba760bb49841908a25546d3620f552c18" Jan 20 18:36:16 crc kubenswrapper[4661]: I0120 18:36:16.450787 4661 scope.go:117] "RemoveContainer" containerID="b40a508cc12511d22aae4c26fcfede2554ce1b126643d36df46ac6337e227d94" Jan 20 18:36:16 crc kubenswrapper[4661]: I0120 18:36:16.477922 4661 scope.go:117] "RemoveContainer" containerID="bfb8fe059c190ece7c98766ac159b4761b111ba1e77524918c001bf90963fc6f" Jan 20 18:36:16 crc kubenswrapper[4661]: I0120 18:36:16.534400 4661 scope.go:117] "RemoveContainer" containerID="e856c8ed4f3ff69814e70be1993cc2847ac0c782d52155a3460e0c435a669ba1" Jan 20 18:36:16 crc kubenswrapper[4661]: I0120 18:36:16.555836 4661 scope.go:117] "RemoveContainer" containerID="166853743caf9feb45b4da68340bd7710917c4dacbe6d5327f532c651daa70fd" Jan 20 18:36:16 crc kubenswrapper[4661]: I0120 18:36:16.580275 4661 scope.go:117] "RemoveContainer" containerID="4e632186df404443d8e36f9acac18c5e5a84d07feaa93ee71904ebd4c057a8bf" Jan 20 18:36:16 crc kubenswrapper[4661]: I0120 18:36:16.601244 4661 scope.go:117] "RemoveContainer" containerID="149dfd1b9ed80c83523be9a2d33cfc40fbe80e6fbcaf62730c06b27739790b82" Jan 20 18:36:16 crc kubenswrapper[4661]: I0120 18:36:16.618608 4661 scope.go:117] "RemoveContainer" containerID="8395cb0b9fda90bec15e72a2609c5c80f301bafa24d002f3743c8fa74498f372" Jan 20 18:36:24 crc kubenswrapper[4661]: I0120 18:36:24.146743 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:36:24 crc kubenswrapper[4661]: E0120 18:36:24.147807 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:36:37 crc kubenswrapper[4661]: I0120 18:36:37.142221 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:36:37 crc kubenswrapper[4661]: E0120 18:36:37.143010 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:36:44 crc kubenswrapper[4661]: I0120 18:36:44.058235 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbltb"] Jan 20 18:36:44 crc kubenswrapper[4661]: I0120 18:36:44.070671 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zbltb"] Jan 20 18:36:44 crc kubenswrapper[4661]: I0120 18:36:44.154432 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d53b16a-96f4-476d-bbfb-4b83adc3e33a" path="/var/lib/kubelet/pods/9d53b16a-96f4-476d-bbfb-4b83adc3e33a/volumes" Jan 20 18:36:48 crc kubenswrapper[4661]: I0120 18:36:48.142773 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:36:48 crc kubenswrapper[4661]: E0120 18:36:48.143306 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:37:00 crc kubenswrapper[4661]: I0120 18:37:00.148623 4661 generic.go:334] "Generic (PLEG): container finished" podID="66e42258-c528-4826-b990-06290a7e595d" containerID="f9e5e1c9ec7c2f5244b61c235e49d2fd02ac958a806cd3e29467783a51b62a32" exitCode=0 Jan 20 18:37:00 crc kubenswrapper[4661]: I0120 18:37:00.153037 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" event={"ID":"66e42258-c528-4826-b990-06290a7e595d","Type":"ContainerDied","Data":"f9e5e1c9ec7c2f5244b61c235e49d2fd02ac958a806cd3e29467783a51b62a32"} Jan 20 18:37:01 crc kubenswrapper[4661]: I0120 18:37:01.513327 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" Jan 20 18:37:01 crc kubenswrapper[4661]: I0120 18:37:01.672388 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lflsf\" (UniqueName: \"kubernetes.io/projected/66e42258-c528-4826-b990-06290a7e595d-kube-api-access-lflsf\") pod \"66e42258-c528-4826-b990-06290a7e595d\" (UID: \"66e42258-c528-4826-b990-06290a7e595d\") " Jan 20 18:37:01 crc kubenswrapper[4661]: I0120 18:37:01.672525 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66e42258-c528-4826-b990-06290a7e595d-inventory\") pod \"66e42258-c528-4826-b990-06290a7e595d\" (UID: \"66e42258-c528-4826-b990-06290a7e595d\") " Jan 20 18:37:01 crc kubenswrapper[4661]: I0120 18:37:01.672579 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66e42258-c528-4826-b990-06290a7e595d-ssh-key-openstack-edpm-ipam\") pod \"66e42258-c528-4826-b990-06290a7e595d\" (UID: \"66e42258-c528-4826-b990-06290a7e595d\") " Jan 20 18:37:01 crc kubenswrapper[4661]: I0120 18:37:01.678935 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e42258-c528-4826-b990-06290a7e595d-kube-api-access-lflsf" (OuterVolumeSpecName: "kube-api-access-lflsf") pod "66e42258-c528-4826-b990-06290a7e595d" (UID: "66e42258-c528-4826-b990-06290a7e595d"). InnerVolumeSpecName "kube-api-access-lflsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:37:01 crc kubenswrapper[4661]: I0120 18:37:01.704267 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e42258-c528-4826-b990-06290a7e595d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "66e42258-c528-4826-b990-06290a7e595d" (UID: "66e42258-c528-4826-b990-06290a7e595d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:37:01 crc kubenswrapper[4661]: I0120 18:37:01.712438 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e42258-c528-4826-b990-06290a7e595d-inventory" (OuterVolumeSpecName: "inventory") pod "66e42258-c528-4826-b990-06290a7e595d" (UID: "66e42258-c528-4826-b990-06290a7e595d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:37:01 crc kubenswrapper[4661]: I0120 18:37:01.774627 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lflsf\" (UniqueName: \"kubernetes.io/projected/66e42258-c528-4826-b990-06290a7e595d-kube-api-access-lflsf\") on node \"crc\" DevicePath \"\"" Jan 20 18:37:01 crc kubenswrapper[4661]: I0120 18:37:01.774653 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66e42258-c528-4826-b990-06290a7e595d-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:37:01 crc kubenswrapper[4661]: I0120 18:37:01.774663 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66e42258-c528-4826-b990-06290a7e595d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.175464 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" event={"ID":"66e42258-c528-4826-b990-06290a7e595d","Type":"ContainerDied","Data":"0bb9739f7c2f8ed128ab84e02c493e77cadc759d40f0afb4d998f551b395b8b1"} Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.175523 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bb9739f7c2f8ed128ab84e02c493e77cadc759d40f0afb4d998f551b395b8b1" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.175537 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.279386 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wvdt6"] Jan 20 18:37:02 crc kubenswrapper[4661]: E0120 18:37:02.279856 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e42258-c528-4826-b990-06290a7e595d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.279888 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e42258-c528-4826-b990-06290a7e595d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.280097 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e42258-c528-4826-b990-06290a7e595d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.280848 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.285644 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.286062 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.286248 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.286409 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.295420 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wvdt6"] Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.388993 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5203b99e-3f05-4d7d-8900-ee1c7d58526d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wvdt6\" (UID: \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.389308 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5203b99e-3f05-4d7d-8900-ee1c7d58526d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wvdt6\" (UID: \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.389445 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txg8p\" (UniqueName: \"kubernetes.io/projected/5203b99e-3f05-4d7d-8900-ee1c7d58526d-kube-api-access-txg8p\") pod \"ssh-known-hosts-edpm-deployment-wvdt6\" (UID: \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.490960 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5203b99e-3f05-4d7d-8900-ee1c7d58526d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wvdt6\" (UID: \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.491318 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5203b99e-3f05-4d7d-8900-ee1c7d58526d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wvdt6\" (UID: \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.491630 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txg8p\" (UniqueName: \"kubernetes.io/projected/5203b99e-3f05-4d7d-8900-ee1c7d58526d-kube-api-access-txg8p\") pod \"ssh-known-hosts-edpm-deployment-wvdt6\" (UID: \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" Jan 20 18:37:02 crc 
kubenswrapper[4661]: I0120 18:37:02.497608 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5203b99e-3f05-4d7d-8900-ee1c7d58526d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wvdt6\" (UID: \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.498245 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5203b99e-3f05-4d7d-8900-ee1c7d58526d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wvdt6\" (UID: \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.517264 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txg8p\" (UniqueName: \"kubernetes.io/projected/5203b99e-3f05-4d7d-8900-ee1c7d58526d-kube-api-access-txg8p\") pod \"ssh-known-hosts-edpm-deployment-wvdt6\" (UID: \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\") " pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" Jan 20 18:37:02 crc kubenswrapper[4661]: I0120 18:37:02.604220 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" Jan 20 18:37:03 crc kubenswrapper[4661]: I0120 18:37:03.048917 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wvdt6"] Jan 20 18:37:03 crc kubenswrapper[4661]: I0120 18:37:03.142348 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:37:03 crc kubenswrapper[4661]: I0120 18:37:03.185227 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" event={"ID":"5203b99e-3f05-4d7d-8900-ee1c7d58526d","Type":"ContainerStarted","Data":"66ea02b23d465eff6f3f9a6f8f226226d9a71c5c26bce2f23d2f39bee076b5f5"} Jan 20 18:37:04 crc kubenswrapper[4661]: I0120 18:37:04.194149 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"525958e045fed6e29649f5dde6c90a67fbcda63ec54c37aaef4ceed26f58480a"} Jan 20 18:37:04 crc kubenswrapper[4661]: I0120 18:37:04.195940 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" event={"ID":"5203b99e-3f05-4d7d-8900-ee1c7d58526d","Type":"ContainerStarted","Data":"f95ab87f307fe5e17634937beab9d1ecac56bf1054ac4aac49b88b1ed27f3712"} Jan 20 18:37:04 crc kubenswrapper[4661]: I0120 18:37:04.254803 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" podStartSLOduration=1.683059952 podStartE2EDuration="2.254725724s" podCreationTimestamp="2026-01-20 18:37:02 +0000 UTC" firstStartedPulling="2026-01-20 18:37:03.051397745 +0000 UTC m=+1879.382187397" lastFinishedPulling="2026-01-20 18:37:03.623063507 +0000 UTC m=+1879.953853169" observedRunningTime="2026-01-20 18:37:04.243457451 +0000 UTC m=+1880.574247133" watchObservedRunningTime="2026-01-20 18:37:04.254725724 +0000 UTC m=+1880.585515386" Jan 20 18:37:08 crc kubenswrapper[4661]: I0120 18:37:08.055052 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-n2hww"] Jan 20 18:37:08 
crc kubenswrapper[4661]: I0120 18:37:08.064240 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-n2hww"] Jan 20 18:37:08 crc kubenswrapper[4661]: I0120 18:37:08.155724 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e665d0f-e6d1-4829-b745-262ce699c011" path="/var/lib/kubelet/pods/1e665d0f-e6d1-4829-b745-262ce699c011/volumes" Jan 20 18:37:09 crc kubenswrapper[4661]: I0120 18:37:09.027160 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9p79w"] Jan 20 18:37:09 crc kubenswrapper[4661]: I0120 18:37:09.034258 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9p79w"] Jan 20 18:37:10 crc kubenswrapper[4661]: I0120 18:37:10.160414 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f75d1b27-5cb6-496a-82ae-e3880dfd4b4c" path="/var/lib/kubelet/pods/f75d1b27-5cb6-496a-82ae-e3880dfd4b4c/volumes" Jan 20 18:37:11 crc kubenswrapper[4661]: I0120 18:37:11.252931 4661 generic.go:334] "Generic (PLEG): container finished" podID="5203b99e-3f05-4d7d-8900-ee1c7d58526d" containerID="f95ab87f307fe5e17634937beab9d1ecac56bf1054ac4aac49b88b1ed27f3712" exitCode=0 Jan 20 18:37:11 crc kubenswrapper[4661]: I0120 18:37:11.253735 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" event={"ID":"5203b99e-3f05-4d7d-8900-ee1c7d58526d","Type":"ContainerDied","Data":"f95ab87f307fe5e17634937beab9d1ecac56bf1054ac4aac49b88b1ed27f3712"} Jan 20 18:37:12 crc kubenswrapper[4661]: I0120 18:37:12.697434 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" Jan 20 18:37:12 crc kubenswrapper[4661]: I0120 18:37:12.784007 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txg8p\" (UniqueName: \"kubernetes.io/projected/5203b99e-3f05-4d7d-8900-ee1c7d58526d-kube-api-access-txg8p\") pod \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\" (UID: \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\") " Jan 20 18:37:12 crc kubenswrapper[4661]: I0120 18:37:12.784366 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5203b99e-3f05-4d7d-8900-ee1c7d58526d-ssh-key-openstack-edpm-ipam\") pod \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\" (UID: \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\") " Jan 20 18:37:12 crc kubenswrapper[4661]: I0120 18:37:12.784460 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5203b99e-3f05-4d7d-8900-ee1c7d58526d-inventory-0\") pod \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\" (UID: \"5203b99e-3f05-4d7d-8900-ee1c7d58526d\") " Jan 20 18:37:12 crc kubenswrapper[4661]: I0120 18:37:12.791970 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5203b99e-3f05-4d7d-8900-ee1c7d58526d-kube-api-access-txg8p" (OuterVolumeSpecName: "kube-api-access-txg8p") pod "5203b99e-3f05-4d7d-8900-ee1c7d58526d" (UID: "5203b99e-3f05-4d7d-8900-ee1c7d58526d"). InnerVolumeSpecName "kube-api-access-txg8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:37:12 crc kubenswrapper[4661]: I0120 18:37:12.822125 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5203b99e-3f05-4d7d-8900-ee1c7d58526d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5203b99e-3f05-4d7d-8900-ee1c7d58526d" (UID: "5203b99e-3f05-4d7d-8900-ee1c7d58526d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:37:12 crc kubenswrapper[4661]: I0120 18:37:12.824997 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5203b99e-3f05-4d7d-8900-ee1c7d58526d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "5203b99e-3f05-4d7d-8900-ee1c7d58526d" (UID: "5203b99e-3f05-4d7d-8900-ee1c7d58526d"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:37:12 crc kubenswrapper[4661]: I0120 18:37:12.886160 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txg8p\" (UniqueName: \"kubernetes.io/projected/5203b99e-3f05-4d7d-8900-ee1c7d58526d-kube-api-access-txg8p\") on node \"crc\" DevicePath \"\"" Jan 20 18:37:12 crc kubenswrapper[4661]: I0120 18:37:12.886206 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5203b99e-3f05-4d7d-8900-ee1c7d58526d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:37:12 crc kubenswrapper[4661]: I0120 18:37:12.886221 4661 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5203b99e-3f05-4d7d-8900-ee1c7d58526d-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.270417 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" event={"ID":"5203b99e-3f05-4d7d-8900-ee1c7d58526d","Type":"ContainerDied","Data":"66ea02b23d465eff6f3f9a6f8f226226d9a71c5c26bce2f23d2f39bee076b5f5"} Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.270470 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66ea02b23d465eff6f3f9a6f8f226226d9a71c5c26bce2f23d2f39bee076b5f5" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.270490 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wvdt6" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.373033 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw"] Jan 20 18:37:13 crc kubenswrapper[4661]: E0120 18:37:13.373432 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5203b99e-3f05-4d7d-8900-ee1c7d58526d" containerName="ssh-known-hosts-edpm-deployment" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.373455 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="5203b99e-3f05-4d7d-8900-ee1c7d58526d" containerName="ssh-known-hosts-edpm-deployment" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.373890 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="5203b99e-3f05-4d7d-8900-ee1c7d58526d" containerName="ssh-known-hosts-edpm-deployment" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.374577 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.377883 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.378866 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.379164 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.381731 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.384017 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw"] Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.579957 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9sv9\" (UniqueName: \"kubernetes.io/projected/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-kube-api-access-s9sv9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-thssw\" (UID: \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.580514 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-thssw\" (UID: \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.580549 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-thssw\" (UID: \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.681859 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-thssw\" (UID: \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.681991 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9sv9\" (UniqueName: \"kubernetes.io/projected/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-kube-api-access-s9sv9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-thssw\" (UID: \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.682048 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-thssw\" (UID: \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.690137 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-thssw\" (UID: \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.697885 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-thssw\" (UID: \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.700519 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9sv9\" (UniqueName: \"kubernetes.io/projected/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-kube-api-access-s9sv9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-thssw\" (UID: \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" Jan 20 18:37:13 crc kubenswrapper[4661]: I0120 18:37:13.785795 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" Jan 20 18:37:14 crc kubenswrapper[4661]: I0120 18:37:14.345914 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw"] Jan 20 18:37:14 crc kubenswrapper[4661]: W0120 18:37:14.353891 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode88a285a_dd4f_4c1f_9b33_c8ebe02c4047.slice/crio-d6617775e70d9cbea336bde0b7025072e05a8cfe67f5e966bf7bf1fdc444507d WatchSource:0}: Error finding container d6617775e70d9cbea336bde0b7025072e05a8cfe67f5e966bf7bf1fdc444507d: Status 404 returned error can't find the container with id d6617775e70d9cbea336bde0b7025072e05a8cfe67f5e966bf7bf1fdc444507d Jan 20 18:37:15 crc kubenswrapper[4661]: I0120 18:37:15.298736 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" event={"ID":"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047","Type":"ContainerStarted","Data":"0ad173982e1363dc07bbdce62e931022fe0217632a01343613c2b02123495a44"} Jan 20 18:37:15 crc kubenswrapper[4661]: I0120 18:37:15.299080 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" event={"ID":"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047","Type":"ContainerStarted","Data":"d6617775e70d9cbea336bde0b7025072e05a8cfe67f5e966bf7bf1fdc444507d"} Jan 20 18:37:15 crc kubenswrapper[4661]: I0120 18:37:15.340909 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" podStartSLOduration=1.922366066 podStartE2EDuration="2.34088916s" podCreationTimestamp="2026-01-20 18:37:13 +0000 UTC" firstStartedPulling="2026-01-20 18:37:14.35808393 +0000 UTC m=+1890.688873592" lastFinishedPulling="2026-01-20 18:37:14.776607004 +0000 UTC m=+1891.107396686" 
observedRunningTime="2026-01-20 18:37:15.321104355 +0000 UTC m=+1891.651894037" watchObservedRunningTime="2026-01-20 18:37:15.34088916 +0000 UTC m=+1891.671678822" Jan 20 18:37:16 crc kubenswrapper[4661]: I0120 18:37:16.823969 4661 scope.go:117] "RemoveContainer" containerID="bb3d7e3bf9f512e16e057a0d13379209ee93ce7865d68839abaebac3894b6855" Jan 20 18:37:16 crc kubenswrapper[4661]: I0120 18:37:16.867645 4661 scope.go:117] "RemoveContainer" containerID="4f65aa729892c3506e526a21564bd0d05bf1660279c82973c9e5a69dc58b5b40" Jan 20 18:37:16 crc kubenswrapper[4661]: I0120 18:37:16.906644 4661 scope.go:117] "RemoveContainer" containerID="28e5b932ef2a30e51319c647957fe9b4e19d736cab1f5d41182433ea4c6885f6" Jan 20 18:37:25 crc kubenswrapper[4661]: I0120 18:37:25.389958 4661 generic.go:334] "Generic (PLEG): container finished" podID="e88a285a-dd4f-4c1f-9b33-c8ebe02c4047" containerID="0ad173982e1363dc07bbdce62e931022fe0217632a01343613c2b02123495a44" exitCode=0 Jan 20 18:37:25 crc kubenswrapper[4661]: I0120 18:37:25.390059 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" event={"ID":"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047","Type":"ContainerDied","Data":"0ad173982e1363dc07bbdce62e931022fe0217632a01343613c2b02123495a44"} Jan 20 18:37:26 crc kubenswrapper[4661]: I0120 18:37:26.788350 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" Jan 20 18:37:26 crc kubenswrapper[4661]: I0120 18:37:26.938965 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-inventory\") pod \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\" (UID: \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\") " Jan 20 18:37:26 crc kubenswrapper[4661]: I0120 18:37:26.939234 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9sv9\" (UniqueName: \"kubernetes.io/projected/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-kube-api-access-s9sv9\") pod \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\" (UID: \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\") " Jan 20 18:37:26 crc kubenswrapper[4661]: I0120 18:37:26.939474 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-ssh-key-openstack-edpm-ipam\") pod \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\" (UID: \"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047\") " Jan 20 18:37:26 crc kubenswrapper[4661]: I0120 18:37:26.946999 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-kube-api-access-s9sv9" (OuterVolumeSpecName: "kube-api-access-s9sv9") pod "e88a285a-dd4f-4c1f-9b33-c8ebe02c4047" (UID: "e88a285a-dd4f-4c1f-9b33-c8ebe02c4047"). InnerVolumeSpecName "kube-api-access-s9sv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:37:26 crc kubenswrapper[4661]: I0120 18:37:26.973603 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e88a285a-dd4f-4c1f-9b33-c8ebe02c4047" (UID: "e88a285a-dd4f-4c1f-9b33-c8ebe02c4047"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:37:26 crc kubenswrapper[4661]: I0120 18:37:26.988703 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-inventory" (OuterVolumeSpecName: "inventory") pod "e88a285a-dd4f-4c1f-9b33-c8ebe02c4047" (UID: "e88a285a-dd4f-4c1f-9b33-c8ebe02c4047"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.041648 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.041851 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9sv9\" (UniqueName: \"kubernetes.io/projected/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-kube-api-access-s9sv9\") on node \"crc\" DevicePath \"\"" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.041907 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.415485 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" event={"ID":"e88a285a-dd4f-4c1f-9b33-c8ebe02c4047","Type":"ContainerDied","Data":"d6617775e70d9cbea336bde0b7025072e05a8cfe67f5e966bf7bf1fdc444507d"} Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.415563 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6617775e70d9cbea336bde0b7025072e05a8cfe67f5e966bf7bf1fdc444507d" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.415610 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.488746 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn"] Jan 20 18:37:27 crc kubenswrapper[4661]: E0120 18:37:27.489185 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88a285a-dd4f-4c1f-9b33-c8ebe02c4047" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.489207 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88a285a-dd4f-4c1f-9b33-c8ebe02c4047" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.489421 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e88a285a-dd4f-4c1f-9b33-c8ebe02c4047" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.490110 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.493118 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.493385 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.493563 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.496178 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.511698 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn"] Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.552290 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/562e95fc-8559-456b-b0b8-ace033341f49-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn\" (UID: \"562e95fc-8559-456b-b0b8-ace033341f49\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.552364 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/562e95fc-8559-456b-b0b8-ace033341f49-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn\" (UID: \"562e95fc-8559-456b-b0b8-ace033341f49\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.552446 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29dm5\" (UniqueName: \"kubernetes.io/projected/562e95fc-8559-456b-b0b8-ace033341f49-kube-api-access-29dm5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn\" (UID: \"562e95fc-8559-456b-b0b8-ace033341f49\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.653774 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29dm5\" (UniqueName: \"kubernetes.io/projected/562e95fc-8559-456b-b0b8-ace033341f49-kube-api-access-29dm5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn\" (UID: \"562e95fc-8559-456b-b0b8-ace033341f49\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.653930 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/562e95fc-8559-456b-b0b8-ace033341f49-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn\" (UID: \"562e95fc-8559-456b-b0b8-ace033341f49\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.653987 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/562e95fc-8559-456b-b0b8-ace033341f49-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn\" (UID: \"562e95fc-8559-456b-b0b8-ace033341f49\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.667849 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/562e95fc-8559-456b-b0b8-ace033341f49-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn\" (UID: \"562e95fc-8559-456b-b0b8-ace033341f49\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.668247 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/562e95fc-8559-456b-b0b8-ace033341f49-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn\" (UID: \"562e95fc-8559-456b-b0b8-ace033341f49\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.675015 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29dm5\" (UniqueName: \"kubernetes.io/projected/562e95fc-8559-456b-b0b8-ace033341f49-kube-api-access-29dm5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn\" (UID: \"562e95fc-8559-456b-b0b8-ace033341f49\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" Jan 20 18:37:27 crc kubenswrapper[4661]: I0120 18:37:27.816141 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" Jan 20 18:37:28 crc kubenswrapper[4661]: I0120 18:37:28.379502 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn"] Jan 20 18:37:28 crc kubenswrapper[4661]: W0120 18:37:28.382785 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod562e95fc_8559_456b_b0b8_ace033341f49.slice/crio-7c17b582b331e5230a7e1667d1eaf9ddadeaebe1defa6199f9a29f474910e449 WatchSource:0}: Error finding container 7c17b582b331e5230a7e1667d1eaf9ddadeaebe1defa6199f9a29f474910e449: Status 404 returned error can't find the container with id 7c17b582b331e5230a7e1667d1eaf9ddadeaebe1defa6199f9a29f474910e449 Jan 20 18:37:28 crc kubenswrapper[4661]: I0120 18:37:28.427744 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" event={"ID":"562e95fc-8559-456b-b0b8-ace033341f49","Type":"ContainerStarted","Data":"7c17b582b331e5230a7e1667d1eaf9ddadeaebe1defa6199f9a29f474910e449"} Jan 20 18:37:29 crc kubenswrapper[4661]: I0120 18:37:29.438699 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" event={"ID":"562e95fc-8559-456b-b0b8-ace033341f49","Type":"ContainerStarted","Data":"46b6d6c55a17c7a8bb051cc7f13181d92a0c99d08ba928aa462ccfa513560894"} Jan 20 18:37:29 crc kubenswrapper[4661]: I0120 18:37:29.473620 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" podStartSLOduration=2.040135627 podStartE2EDuration="2.473591089s" podCreationTimestamp="2026-01-20 18:37:27 +0000 UTC" firstStartedPulling="2026-01-20 18:37:28.386294292 +0000 UTC m=+1904.717083964" lastFinishedPulling="2026-01-20 18:37:28.819749754 +0000 UTC 
m=+1905.150539426" observedRunningTime="2026-01-20 18:37:29.456024502 +0000 UTC m=+1905.786814194" watchObservedRunningTime="2026-01-20 18:37:29.473591089 +0000 UTC m=+1905.804380771" Jan 20 18:37:39 crc kubenswrapper[4661]: I0120 18:37:39.524757 4661 generic.go:334] "Generic (PLEG): container finished" podID="562e95fc-8559-456b-b0b8-ace033341f49" containerID="46b6d6c55a17c7a8bb051cc7f13181d92a0c99d08ba928aa462ccfa513560894" exitCode=0 Jan 20 18:37:39 crc kubenswrapper[4661]: I0120 18:37:39.524823 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" event={"ID":"562e95fc-8559-456b-b0b8-ace033341f49","Type":"ContainerDied","Data":"46b6d6c55a17c7a8bb051cc7f13181d92a0c99d08ba928aa462ccfa513560894"} Jan 20 18:37:40 crc kubenswrapper[4661]: I0120 18:37:40.959399 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" Jan 20 18:37:41 crc kubenswrapper[4661]: I0120 18:37:41.122032 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29dm5\" (UniqueName: \"kubernetes.io/projected/562e95fc-8559-456b-b0b8-ace033341f49-kube-api-access-29dm5\") pod \"562e95fc-8559-456b-b0b8-ace033341f49\" (UID: \"562e95fc-8559-456b-b0b8-ace033341f49\") " Jan 20 18:37:41 crc kubenswrapper[4661]: I0120 18:37:41.122210 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/562e95fc-8559-456b-b0b8-ace033341f49-inventory\") pod \"562e95fc-8559-456b-b0b8-ace033341f49\" (UID: \"562e95fc-8559-456b-b0b8-ace033341f49\") " Jan 20 18:37:41 crc kubenswrapper[4661]: I0120 18:37:41.122484 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/562e95fc-8559-456b-b0b8-ace033341f49-ssh-key-openstack-edpm-ipam\") pod \"562e95fc-8559-456b-b0b8-ace033341f49\" (UID: \"562e95fc-8559-456b-b0b8-ace033341f49\") " Jan 20 18:37:41 crc kubenswrapper[4661]: I0120 18:37:41.130366 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/562e95fc-8559-456b-b0b8-ace033341f49-kube-api-access-29dm5" (OuterVolumeSpecName: "kube-api-access-29dm5") pod "562e95fc-8559-456b-b0b8-ace033341f49" (UID: "562e95fc-8559-456b-b0b8-ace033341f49"). InnerVolumeSpecName "kube-api-access-29dm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:37:41 crc kubenswrapper[4661]: I0120 18:37:41.147006 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/562e95fc-8559-456b-b0b8-ace033341f49-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "562e95fc-8559-456b-b0b8-ace033341f49" (UID: "562e95fc-8559-456b-b0b8-ace033341f49"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:37:41 crc kubenswrapper[4661]: I0120 18:37:41.149067 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/562e95fc-8559-456b-b0b8-ace033341f49-inventory" (OuterVolumeSpecName: "inventory") pod "562e95fc-8559-456b-b0b8-ace033341f49" (UID: "562e95fc-8559-456b-b0b8-ace033341f49"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:37:41 crc kubenswrapper[4661]: I0120 18:37:41.225340 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/562e95fc-8559-456b-b0b8-ace033341f49-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:37:41 crc kubenswrapper[4661]: I0120 18:37:41.225542 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29dm5\" (UniqueName: \"kubernetes.io/projected/562e95fc-8559-456b-b0b8-ace033341f49-kube-api-access-29dm5\") on node \"crc\" DevicePath \"\"" Jan 20 18:37:41 crc kubenswrapper[4661]: I0120 18:37:41.225618 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/562e95fc-8559-456b-b0b8-ace033341f49-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:37:41 crc kubenswrapper[4661]: I0120 18:37:41.542345 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" event={"ID":"562e95fc-8559-456b-b0b8-ace033341f49","Type":"ContainerDied","Data":"7c17b582b331e5230a7e1667d1eaf9ddadeaebe1defa6199f9a29f474910e449"} Jan 20 18:37:41 crc kubenswrapper[4661]: I0120 18:37:41.542593 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c17b582b331e5230a7e1667d1eaf9ddadeaebe1defa6199f9a29f474910e449" Jan 20 18:37:41 crc kubenswrapper[4661]: I0120 18:37:41.542396 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn" Jan 20 18:37:53 crc kubenswrapper[4661]: I0120 18:37:53.071797 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jqfj9"] Jan 20 18:37:53 crc kubenswrapper[4661]: I0120 18:37:53.081551 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jqfj9"] Jan 20 18:37:54 crc kubenswrapper[4661]: I0120 18:37:54.152648 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb4b490-1124-4361-b5fb-ca6db9245b74" path="/var/lib/kubelet/pods/7cb4b490-1124-4361-b5fb-ca6db9245b74/volumes" Jan 20 18:38:17 crc kubenswrapper[4661]: I0120 18:38:17.024782 4661 scope.go:117] "RemoveContainer" containerID="e0105e71385efa784d6b559e5f64ab0b499e1fce55bafb85eb78a914adb65420" Jan 20 18:39:29 crc kubenswrapper[4661]: I0120 18:39:29.324137 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:39:29 crc kubenswrapper[4661]: I0120 18:39:29.324876 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:39:59 crc kubenswrapper[4661]: I0120 18:39:59.323362 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:39:59 crc kubenswrapper[4661]: I0120 
18:39:59.323951 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.323417 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.324109 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.324170 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.324832 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"525958e045fed6e29649f5dde6c90a67fbcda63ec54c37aaef4ceed26f58480a"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.324891 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://525958e045fed6e29649f5dde6c90a67fbcda63ec54c37aaef4ceed26f58480a" gracePeriod=600 Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.397246 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9twg9"] Jan 20 18:40:29 crc kubenswrapper[4661]: E0120 18:40:29.397897 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="562e95fc-8559-456b-b0b8-ace033341f49" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.397917 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="562e95fc-8559-456b-b0b8-ace033341f49" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.398134 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="562e95fc-8559-456b-b0b8-ace033341f49" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.399582 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.423984 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9twg9"] Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.521137 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f18f75da-5b8c-4193-957b-5a226f40a637-utilities\") pod \"redhat-operators-9twg9\" (UID: \"f18f75da-5b8c-4193-957b-5a226f40a637\") " pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.521191 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f18f75da-5b8c-4193-957b-5a226f40a637-catalog-content\") pod \"redhat-operators-9twg9\" (UID: \"f18f75da-5b8c-4193-957b-5a226f40a637\") " pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.521215 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25jzk\" (UniqueName: \"kubernetes.io/projected/f18f75da-5b8c-4193-957b-5a226f40a637-kube-api-access-25jzk\") pod \"redhat-operators-9twg9\" (UID: \"f18f75da-5b8c-4193-957b-5a226f40a637\") " pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.622859 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f18f75da-5b8c-4193-957b-5a226f40a637-utilities\") pod \"redhat-operators-9twg9\" (UID: \"f18f75da-5b8c-4193-957b-5a226f40a637\") " pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.623202 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f18f75da-5b8c-4193-957b-5a226f40a637-catalog-content\") pod \"redhat-operators-9twg9\" (UID: \"f18f75da-5b8c-4193-957b-5a226f40a637\") " pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.623226 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25jzk\" (UniqueName: \"kubernetes.io/projected/f18f75da-5b8c-4193-957b-5a226f40a637-kube-api-access-25jzk\") pod \"redhat-operators-9twg9\" (UID: \"f18f75da-5b8c-4193-957b-5a226f40a637\") " pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.623331 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f18f75da-5b8c-4193-957b-5a226f40a637-utilities\") pod \"redhat-operators-9twg9\" (UID: \"f18f75da-5b8c-4193-957b-5a226f40a637\") " pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.623520 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f18f75da-5b8c-4193-957b-5a226f40a637-catalog-content\") pod \"redhat-operators-9twg9\" (UID: \"f18f75da-5b8c-4193-957b-5a226f40a637\") " pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.643733 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-25jzk\" (UniqueName: \"kubernetes.io/projected/f18f75da-5b8c-4193-957b-5a226f40a637-kube-api-access-25jzk\") pod \"redhat-operators-9twg9\" (UID: \"f18f75da-5b8c-4193-957b-5a226f40a637\") " pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:29 crc kubenswrapper[4661]: I0120 18:40:29.832901 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:30 crc kubenswrapper[4661]: I0120 18:40:30.275137 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9twg9"] Jan 20 18:40:30 crc kubenswrapper[4661]: W0120 18:40:30.282599 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf18f75da_5b8c_4193_957b_5a226f40a637.slice/crio-c1cf6c6711592770b7cd8d01be9e812b849fc366974bd2c6edb3f3cbb785d851 WatchSource:0}: Error finding container c1cf6c6711592770b7cd8d01be9e812b849fc366974bd2c6edb3f3cbb785d851: Status 404 returned error can't find the container with id c1cf6c6711592770b7cd8d01be9e812b849fc366974bd2c6edb3f3cbb785d851 Jan 20 18:40:30 crc kubenswrapper[4661]: I0120 18:40:30.305552 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9twg9" event={"ID":"f18f75da-5b8c-4193-957b-5a226f40a637","Type":"ContainerStarted","Data":"c1cf6c6711592770b7cd8d01be9e812b849fc366974bd2c6edb3f3cbb785d851"} Jan 20 18:40:30 crc kubenswrapper[4661]: I0120 18:40:30.307935 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="525958e045fed6e29649f5dde6c90a67fbcda63ec54c37aaef4ceed26f58480a" exitCode=0 Jan 20 18:40:30 crc kubenswrapper[4661]: I0120 18:40:30.307971 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"525958e045fed6e29649f5dde6c90a67fbcda63ec54c37aaef4ceed26f58480a"} Jan 20 18:40:30 crc kubenswrapper[4661]: I0120 18:40:30.307994 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791"} Jan 20 18:40:30 crc kubenswrapper[4661]: I0120 18:40:30.308012 4661 scope.go:117] "RemoveContainer" containerID="a002274f41223b9a3369067182e338344af8ce86db3dfb1e5a412006f071924e" Jan 20 18:40:31 crc kubenswrapper[4661]: I0120 18:40:31.321105 4661 generic.go:334] "Generic (PLEG): container finished" podID="f18f75da-5b8c-4193-957b-5a226f40a637" containerID="db6f8dc8aa9131081fe26bc873303f0606034def669776cbaac3d89ad8040990" exitCode=0 Jan 20 18:40:31 crc kubenswrapper[4661]: I0120 18:40:31.321310 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9twg9" event={"ID":"f18f75da-5b8c-4193-957b-5a226f40a637","Type":"ContainerDied","Data":"db6f8dc8aa9131081fe26bc873303f0606034def669776cbaac3d89ad8040990"} Jan 20 18:40:31 crc kubenswrapper[4661]: I0120 18:40:31.324467 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 18:40:32 crc kubenswrapper[4661]: I0120 18:40:32.335020 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9twg9" 
event={"ID":"f18f75da-5b8c-4193-957b-5a226f40a637","Type":"ContainerStarted","Data":"774282b7a7800ffcbb062dcf7f9c8883edab094b20d01877012985ac3147aab1"} Jan 20 18:40:37 crc kubenswrapper[4661]: I0120 18:40:37.410104 4661 generic.go:334] "Generic (PLEG): container finished" podID="f18f75da-5b8c-4193-957b-5a226f40a637" containerID="774282b7a7800ffcbb062dcf7f9c8883edab094b20d01877012985ac3147aab1" exitCode=0 Jan 20 18:40:37 crc kubenswrapper[4661]: I0120 18:40:37.410179 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9twg9" event={"ID":"f18f75da-5b8c-4193-957b-5a226f40a637","Type":"ContainerDied","Data":"774282b7a7800ffcbb062dcf7f9c8883edab094b20d01877012985ac3147aab1"} Jan 20 18:40:38 crc kubenswrapper[4661]: I0120 18:40:38.421224 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9twg9" event={"ID":"f18f75da-5b8c-4193-957b-5a226f40a637","Type":"ContainerStarted","Data":"5819086d121ced997ab6c94a6d6e417d0bd42e93d46b0ca9233a10532cfecad5"} Jan 20 18:40:38 crc kubenswrapper[4661]: I0120 18:40:38.448437 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9twg9" podStartSLOduration=2.853846601 podStartE2EDuration="9.448418561s" podCreationTimestamp="2026-01-20 18:40:29 +0000 UTC" firstStartedPulling="2026-01-20 18:40:31.324150998 +0000 UTC m=+2087.654940660" lastFinishedPulling="2026-01-20 18:40:37.918722948 +0000 UTC m=+2094.249512620" observedRunningTime="2026-01-20 18:40:38.444817738 +0000 UTC m=+2094.775607410" watchObservedRunningTime="2026-01-20 18:40:38.448418561 +0000 UTC m=+2094.779208223" Jan 20 18:40:39 crc kubenswrapper[4661]: I0120 18:40:39.833465 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:39 crc kubenswrapper[4661]: I0120 18:40:39.833816 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:40 crc kubenswrapper[4661]: I0120 18:40:40.890571 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9twg9" podUID="f18f75da-5b8c-4193-957b-5a226f40a637" containerName="registry-server" probeResult="failure" output=< Jan 20 18:40:40 crc kubenswrapper[4661]: timeout: failed to connect service ":50051" within 1s Jan 20 18:40:40 crc kubenswrapper[4661]: > Jan 20 18:40:49 crc kubenswrapper[4661]: I0120 18:40:49.893161 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:49 crc kubenswrapper[4661]: I0120 18:40:49.952778 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:50 crc kubenswrapper[4661]: I0120 18:40:50.155498 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9twg9"] Jan 20 18:40:51 crc kubenswrapper[4661]: I0120 18:40:51.532704 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9twg9" podUID="f18f75da-5b8c-4193-957b-5a226f40a637" containerName="registry-server" containerID="cri-o://5819086d121ced997ab6c94a6d6e417d0bd42e93d46b0ca9233a10532cfecad5" gracePeriod=2 Jan 20 18:40:51 crc kubenswrapper[4661]: I0120 18:40:51.952969 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.082059 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f18f75da-5b8c-4193-957b-5a226f40a637-catalog-content\") pod \"f18f75da-5b8c-4193-957b-5a226f40a637\" (UID: \"f18f75da-5b8c-4193-957b-5a226f40a637\") " Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.082157 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f18f75da-5b8c-4193-957b-5a226f40a637-utilities\") pod \"f18f75da-5b8c-4193-957b-5a226f40a637\" (UID: \"f18f75da-5b8c-4193-957b-5a226f40a637\") " Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.082202 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25jzk\" (UniqueName: \"kubernetes.io/projected/f18f75da-5b8c-4193-957b-5a226f40a637-kube-api-access-25jzk\") pod \"f18f75da-5b8c-4193-957b-5a226f40a637\" (UID: \"f18f75da-5b8c-4193-957b-5a226f40a637\") " Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.084849 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f18f75da-5b8c-4193-957b-5a226f40a637-utilities" (OuterVolumeSpecName: "utilities") pod "f18f75da-5b8c-4193-957b-5a226f40a637" (UID: "f18f75da-5b8c-4193-957b-5a226f40a637"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.089856 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18f75da-5b8c-4193-957b-5a226f40a637-kube-api-access-25jzk" (OuterVolumeSpecName: "kube-api-access-25jzk") pod "f18f75da-5b8c-4193-957b-5a226f40a637" (UID: "f18f75da-5b8c-4193-957b-5a226f40a637"). InnerVolumeSpecName "kube-api-access-25jzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.184657 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f18f75da-5b8c-4193-957b-5a226f40a637-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.186474 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25jzk\" (UniqueName: \"kubernetes.io/projected/f18f75da-5b8c-4193-957b-5a226f40a637-kube-api-access-25jzk\") on node \"crc\" DevicePath \"\"" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.258278 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f18f75da-5b8c-4193-957b-5a226f40a637-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f18f75da-5b8c-4193-957b-5a226f40a637" (UID: "f18f75da-5b8c-4193-957b-5a226f40a637"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.288482 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f18f75da-5b8c-4193-957b-5a226f40a637-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.541404 4661 generic.go:334] "Generic (PLEG): container finished" podID="f18f75da-5b8c-4193-957b-5a226f40a637" containerID="5819086d121ced997ab6c94a6d6e417d0bd42e93d46b0ca9233a10532cfecad5" exitCode=0 Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.541447 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9twg9" event={"ID":"f18f75da-5b8c-4193-957b-5a226f40a637","Type":"ContainerDied","Data":"5819086d121ced997ab6c94a6d6e417d0bd42e93d46b0ca9233a10532cfecad5"} Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.541470 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9twg9" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.541487 4661 scope.go:117] "RemoveContainer" containerID="5819086d121ced997ab6c94a6d6e417d0bd42e93d46b0ca9233a10532cfecad5" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.541475 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9twg9" event={"ID":"f18f75da-5b8c-4193-957b-5a226f40a637","Type":"ContainerDied","Data":"c1cf6c6711592770b7cd8d01be9e812b849fc366974bd2c6edb3f3cbb785d851"} Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.565378 4661 scope.go:117] "RemoveContainer" containerID="774282b7a7800ffcbb062dcf7f9c8883edab094b20d01877012985ac3147aab1" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.591597 4661 scope.go:117] "RemoveContainer" containerID="db6f8dc8aa9131081fe26bc873303f0606034def669776cbaac3d89ad8040990" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.596571 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9twg9"] Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.604059 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9twg9"] Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.648171 4661 scope.go:117] "RemoveContainer" containerID="5819086d121ced997ab6c94a6d6e417d0bd42e93d46b0ca9233a10532cfecad5" Jan 20 18:40:52 crc kubenswrapper[4661]: E0120 18:40:52.648709 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5819086d121ced997ab6c94a6d6e417d0bd42e93d46b0ca9233a10532cfecad5\": container with ID starting with 5819086d121ced997ab6c94a6d6e417d0bd42e93d46b0ca9233a10532cfecad5 not found: ID does not exist" containerID="5819086d121ced997ab6c94a6d6e417d0bd42e93d46b0ca9233a10532cfecad5" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.648744 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5819086d121ced997ab6c94a6d6e417d0bd42e93d46b0ca9233a10532cfecad5"} err="failed to get container status \"5819086d121ced997ab6c94a6d6e417d0bd42e93d46b0ca9233a10532cfecad5\": rpc error: code = NotFound desc = could not find container \"5819086d121ced997ab6c94a6d6e417d0bd42e93d46b0ca9233a10532cfecad5\": container with ID starting with 5819086d121ced997ab6c94a6d6e417d0bd42e93d46b0ca9233a10532cfecad5 not found: ID does not exist" Jan 20 18:40:52 crc 
kubenswrapper[4661]: I0120 18:40:52.648796 4661 scope.go:117] "RemoveContainer" containerID="774282b7a7800ffcbb062dcf7f9c8883edab094b20d01877012985ac3147aab1" Jan 20 18:40:52 crc kubenswrapper[4661]: E0120 18:40:52.649136 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"774282b7a7800ffcbb062dcf7f9c8883edab094b20d01877012985ac3147aab1\": container with ID starting with 774282b7a7800ffcbb062dcf7f9c8883edab094b20d01877012985ac3147aab1 not found: ID does not exist" containerID="774282b7a7800ffcbb062dcf7f9c8883edab094b20d01877012985ac3147aab1" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.649186 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"774282b7a7800ffcbb062dcf7f9c8883edab094b20d01877012985ac3147aab1"} err="failed to get container status \"774282b7a7800ffcbb062dcf7f9c8883edab094b20d01877012985ac3147aab1\": rpc error: code = NotFound desc = could not find container \"774282b7a7800ffcbb062dcf7f9c8883edab094b20d01877012985ac3147aab1\": container with ID starting with 774282b7a7800ffcbb062dcf7f9c8883edab094b20d01877012985ac3147aab1 not found: ID does not exist" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.649199 4661 scope.go:117] "RemoveContainer" containerID="db6f8dc8aa9131081fe26bc873303f0606034def669776cbaac3d89ad8040990" Jan 20 18:40:52 crc kubenswrapper[4661]: E0120 18:40:52.649515 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6f8dc8aa9131081fe26bc873303f0606034def669776cbaac3d89ad8040990\": container with ID starting with db6f8dc8aa9131081fe26bc873303f0606034def669776cbaac3d89ad8040990 not found: ID does not exist" containerID="db6f8dc8aa9131081fe26bc873303f0606034def669776cbaac3d89ad8040990" Jan 20 18:40:52 crc kubenswrapper[4661]: I0120 18:40:52.649536 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6f8dc8aa9131081fe26bc873303f0606034def669776cbaac3d89ad8040990"} err="failed to get container status \"db6f8dc8aa9131081fe26bc873303f0606034def669776cbaac3d89ad8040990\": rpc error: code = NotFound desc = could not find container \"db6f8dc8aa9131081fe26bc873303f0606034def669776cbaac3d89ad8040990\": container with ID starting with db6f8dc8aa9131081fe26bc873303f0606034def669776cbaac3d89ad8040990 not found: ID does not exist" Jan 20 18:40:54 crc kubenswrapper[4661]: I0120 18:40:54.153084 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f18f75da-5b8c-4193-957b-5a226f40a637" path="/var/lib/kubelet/pods/f18f75da-5b8c-4193-957b-5a226f40a637/volumes" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.238029 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zhmqf"] Jan 20 18:41:33 crc kubenswrapper[4661]: E0120 18:41:33.238976 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18f75da-5b8c-4193-957b-5a226f40a637" containerName="extract-utilities" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.238992 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18f75da-5b8c-4193-957b-5a226f40a637" containerName="extract-utilities" Jan 20 18:41:33 crc kubenswrapper[4661]: E0120 18:41:33.239020 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18f75da-5b8c-4193-957b-5a226f40a637" containerName="extract-content" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.239029 4661 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f18f75da-5b8c-4193-957b-5a226f40a637" containerName="extract-content" Jan 20 18:41:33 crc kubenswrapper[4661]: E0120 18:41:33.239048 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18f75da-5b8c-4193-957b-5a226f40a637" containerName="registry-server" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.239056 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18f75da-5b8c-4193-957b-5a226f40a637" containerName="registry-server" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.239268 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18f75da-5b8c-4193-957b-5a226f40a637" containerName="registry-server" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.240755 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.253902 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhmqf"] Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.393670 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/578ff0f1-a322-4357-8440-96511dfba67c-catalog-content\") pod \"redhat-marketplace-zhmqf\" (UID: \"578ff0f1-a322-4357-8440-96511dfba67c\") " pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.394065 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85d27\" (UniqueName: \"kubernetes.io/projected/578ff0f1-a322-4357-8440-96511dfba67c-kube-api-access-85d27\") pod \"redhat-marketplace-zhmqf\" (UID: \"578ff0f1-a322-4357-8440-96511dfba67c\") " pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.394270 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/578ff0f1-a322-4357-8440-96511dfba67c-utilities\") pod \"redhat-marketplace-zhmqf\" (UID: \"578ff0f1-a322-4357-8440-96511dfba67c\") " pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.495757 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85d27\" (UniqueName: \"kubernetes.io/projected/578ff0f1-a322-4357-8440-96511dfba67c-kube-api-access-85d27\") pod \"redhat-marketplace-zhmqf\" (UID: \"578ff0f1-a322-4357-8440-96511dfba67c\") " pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.496104 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/578ff0f1-a322-4357-8440-96511dfba67c-utilities\") pod \"redhat-marketplace-zhmqf\" (UID: \"578ff0f1-a322-4357-8440-96511dfba67c\") " pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.496228 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/578ff0f1-a322-4357-8440-96511dfba67c-catalog-content\") pod \"redhat-marketplace-zhmqf\" (UID: \"578ff0f1-a322-4357-8440-96511dfba67c\") " pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:33 crc 
kubenswrapper[4661]: I0120 18:41:33.496501 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/578ff0f1-a322-4357-8440-96511dfba67c-utilities\") pod \"redhat-marketplace-zhmqf\" (UID: \"578ff0f1-a322-4357-8440-96511dfba67c\") " pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.496655 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/578ff0f1-a322-4357-8440-96511dfba67c-catalog-content\") pod \"redhat-marketplace-zhmqf\" (UID: \"578ff0f1-a322-4357-8440-96511dfba67c\") " pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.518368 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85d27\" (UniqueName: \"kubernetes.io/projected/578ff0f1-a322-4357-8440-96511dfba67c-kube-api-access-85d27\") pod \"redhat-marketplace-zhmqf\" (UID: \"578ff0f1-a322-4357-8440-96511dfba67c\") " pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:33 crc kubenswrapper[4661]: I0120 18:41:33.560542 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:34 crc kubenswrapper[4661]: I0120 18:41:34.012567 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhmqf"] Jan 20 18:41:34 crc kubenswrapper[4661]: I0120 18:41:34.903075 4661 generic.go:334] "Generic (PLEG): container finished" podID="578ff0f1-a322-4357-8440-96511dfba67c" containerID="e91baff2e7e800f99ba6b4731f7bcb8ddda16a4c37241cf79729d0cd43c86991" exitCode=0 Jan 20 18:41:34 crc kubenswrapper[4661]: I0120 18:41:34.903201 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhmqf" event={"ID":"578ff0f1-a322-4357-8440-96511dfba67c","Type":"ContainerDied","Data":"e91baff2e7e800f99ba6b4731f7bcb8ddda16a4c37241cf79729d0cd43c86991"} Jan 20 18:41:34 crc kubenswrapper[4661]: I0120 18:41:34.903408 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhmqf" event={"ID":"578ff0f1-a322-4357-8440-96511dfba67c","Type":"ContainerStarted","Data":"d38555a96e25d8f1bdb296a1e030cbd7239b2f2523e515d084c0462541c63121"} Jan 20 18:41:35 crc kubenswrapper[4661]: I0120 18:41:35.912729 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhmqf" event={"ID":"578ff0f1-a322-4357-8440-96511dfba67c","Type":"ContainerStarted","Data":"d373ec39c585a444469533add0d93d45be36d79a9c9b7ba6add5bd35498ca2c5"} Jan 20 18:41:36 crc kubenswrapper[4661]: I0120 18:41:36.932341 4661 generic.go:334] "Generic (PLEG): container finished" podID="578ff0f1-a322-4357-8440-96511dfba67c" containerID="d373ec39c585a444469533add0d93d45be36d79a9c9b7ba6add5bd35498ca2c5" exitCode=0 Jan 20 18:41:36 crc kubenswrapper[4661]: I0120 18:41:36.934746 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhmqf" event={"ID":"578ff0f1-a322-4357-8440-96511dfba67c","Type":"ContainerDied","Data":"d373ec39c585a444469533add0d93d45be36d79a9c9b7ba6add5bd35498ca2c5"} Jan 20 18:41:37 crc kubenswrapper[4661]: I0120 18:41:37.945201 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhmqf" 
event={"ID":"578ff0f1-a322-4357-8440-96511dfba67c","Type":"ContainerStarted","Data":"c080fedd8467a9d01b77e0b88b056eb543ff9154241fbaccbe42fcb4c1b4e964"} Jan 20 18:41:37 crc kubenswrapper[4661]: I0120 18:41:37.979114 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zhmqf" podStartSLOduration=2.522391717 podStartE2EDuration="4.979079367s" podCreationTimestamp="2026-01-20 18:41:33 +0000 UTC" firstStartedPulling="2026-01-20 18:41:34.906808594 +0000 UTC m=+2151.237598256" lastFinishedPulling="2026-01-20 18:41:37.363496254 +0000 UTC m=+2153.694285906" observedRunningTime="2026-01-20 18:41:37.975999575 +0000 UTC m=+2154.306789237" watchObservedRunningTime="2026-01-20 18:41:37.979079367 +0000 UTC m=+2154.309869029" Jan 20 18:41:43 crc kubenswrapper[4661]: I0120 18:41:43.560708 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:43 crc kubenswrapper[4661]: I0120 18:41:43.561244 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:43 crc kubenswrapper[4661]: I0120 18:41:43.613905 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:44 crc kubenswrapper[4661]: I0120 18:41:44.041537 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:44 crc kubenswrapper[4661]: I0120 18:41:44.101300 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhmqf"] Jan 20 18:41:46 crc kubenswrapper[4661]: I0120 18:41:46.008415 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zhmqf" podUID="578ff0f1-a322-4357-8440-96511dfba67c" containerName="registry-server" containerID="cri-o://c080fedd8467a9d01b77e0b88b056eb543ff9154241fbaccbe42fcb4c1b4e964" gracePeriod=2 Jan 20 18:41:46 crc kubenswrapper[4661]: I0120 18:41:46.476449 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:46 crc kubenswrapper[4661]: I0120 18:41:46.640069 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85d27\" (UniqueName: \"kubernetes.io/projected/578ff0f1-a322-4357-8440-96511dfba67c-kube-api-access-85d27\") pod \"578ff0f1-a322-4357-8440-96511dfba67c\" (UID: \"578ff0f1-a322-4357-8440-96511dfba67c\") " Jan 20 18:41:46 crc kubenswrapper[4661]: I0120 18:41:46.640190 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/578ff0f1-a322-4357-8440-96511dfba67c-utilities\") pod \"578ff0f1-a322-4357-8440-96511dfba67c\" (UID: \"578ff0f1-a322-4357-8440-96511dfba67c\") " Jan 20 18:41:46 crc kubenswrapper[4661]: I0120 18:41:46.640237 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/578ff0f1-a322-4357-8440-96511dfba67c-catalog-content\") pod \"578ff0f1-a322-4357-8440-96511dfba67c\" (UID: \"578ff0f1-a322-4357-8440-96511dfba67c\") " Jan 20 18:41:46 crc kubenswrapper[4661]: I0120 18:41:46.641185 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/578ff0f1-a322-4357-8440-96511dfba67c-utilities" (OuterVolumeSpecName: "utilities") pod "578ff0f1-a322-4357-8440-96511dfba67c" (UID: "578ff0f1-a322-4357-8440-96511dfba67c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:41:46 crc kubenswrapper[4661]: I0120 18:41:46.660476 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578ff0f1-a322-4357-8440-96511dfba67c-kube-api-access-85d27" (OuterVolumeSpecName: "kube-api-access-85d27") pod "578ff0f1-a322-4357-8440-96511dfba67c" (UID: "578ff0f1-a322-4357-8440-96511dfba67c"). InnerVolumeSpecName "kube-api-access-85d27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:41:46 crc kubenswrapper[4661]: I0120 18:41:46.665184 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/578ff0f1-a322-4357-8440-96511dfba67c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "578ff0f1-a322-4357-8440-96511dfba67c" (UID: "578ff0f1-a322-4357-8440-96511dfba67c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:41:46 crc kubenswrapper[4661]: I0120 18:41:46.742084 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85d27\" (UniqueName: \"kubernetes.io/projected/578ff0f1-a322-4357-8440-96511dfba67c-kube-api-access-85d27\") on node \"crc\" DevicePath \"\"" Jan 20 18:41:46 crc kubenswrapper[4661]: I0120 18:41:46.742472 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/578ff0f1-a322-4357-8440-96511dfba67c-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:41:46 crc kubenswrapper[4661]: I0120 18:41:46.742488 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/578ff0f1-a322-4357-8440-96511dfba67c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.020766 4661 generic.go:334] "Generic (PLEG): container finished" podID="578ff0f1-a322-4357-8440-96511dfba67c" containerID="c080fedd8467a9d01b77e0b88b056eb543ff9154241fbaccbe42fcb4c1b4e964" exitCode=0 Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.020823 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhmqf" event={"ID":"578ff0f1-a322-4357-8440-96511dfba67c","Type":"ContainerDied","Data":"c080fedd8467a9d01b77e0b88b056eb543ff9154241fbaccbe42fcb4c1b4e964"} Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.020841 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zhmqf" Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.020870 4661 scope.go:117] "RemoveContainer" containerID="c080fedd8467a9d01b77e0b88b056eb543ff9154241fbaccbe42fcb4c1b4e964" Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.020857 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhmqf" event={"ID":"578ff0f1-a322-4357-8440-96511dfba67c","Type":"ContainerDied","Data":"d38555a96e25d8f1bdb296a1e030cbd7239b2f2523e515d084c0462541c63121"} Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.065619 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhmqf"] Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.084197 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhmqf"] Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.097827 4661 scope.go:117] "RemoveContainer" containerID="d373ec39c585a444469533add0d93d45be36d79a9c9b7ba6add5bd35498ca2c5" Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.147827 4661 scope.go:117] "RemoveContainer" containerID="e91baff2e7e800f99ba6b4731f7bcb8ddda16a4c37241cf79729d0cd43c86991" Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.215812 4661 scope.go:117] "RemoveContainer" containerID="c080fedd8467a9d01b77e0b88b056eb543ff9154241fbaccbe42fcb4c1b4e964" Jan 20 18:41:47 crc kubenswrapper[4661]: E0120 18:41:47.216469 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c080fedd8467a9d01b77e0b88b056eb543ff9154241fbaccbe42fcb4c1b4e964\": container with ID starting with c080fedd8467a9d01b77e0b88b056eb543ff9154241fbaccbe42fcb4c1b4e964 not found: ID does not exist" containerID="c080fedd8467a9d01b77e0b88b056eb543ff9154241fbaccbe42fcb4c1b4e964" Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.216556 4661 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c080fedd8467a9d01b77e0b88b056eb543ff9154241fbaccbe42fcb4c1b4e964"} err="failed to get container status \"c080fedd8467a9d01b77e0b88b056eb543ff9154241fbaccbe42fcb4c1b4e964\": rpc error: code = NotFound desc = could not find container \"c080fedd8467a9d01b77e0b88b056eb543ff9154241fbaccbe42fcb4c1b4e964\": container with ID starting with c080fedd8467a9d01b77e0b88b056eb543ff9154241fbaccbe42fcb4c1b4e964 not found: ID does not exist" Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.216578 4661 scope.go:117] "RemoveContainer" containerID="d373ec39c585a444469533add0d93d45be36d79a9c9b7ba6add5bd35498ca2c5" Jan 20 18:41:47 crc kubenswrapper[4661]: E0120 18:41:47.216879 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d373ec39c585a444469533add0d93d45be36d79a9c9b7ba6add5bd35498ca2c5\": container with ID starting with d373ec39c585a444469533add0d93d45be36d79a9c9b7ba6add5bd35498ca2c5 not found: ID does not exist" containerID="d373ec39c585a444469533add0d93d45be36d79a9c9b7ba6add5bd35498ca2c5" Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.216901 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d373ec39c585a444469533add0d93d45be36d79a9c9b7ba6add5bd35498ca2c5"} err="failed to get container status \"d373ec39c585a444469533add0d93d45be36d79a9c9b7ba6add5bd35498ca2c5\": rpc error: code = NotFound desc = could not find container \"d373ec39c585a444469533add0d93d45be36d79a9c9b7ba6add5bd35498ca2c5\": container with ID starting with d373ec39c585a444469533add0d93d45be36d79a9c9b7ba6add5bd35498ca2c5 not found: ID does not exist" Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.216915 4661 scope.go:117] "RemoveContainer" containerID="e91baff2e7e800f99ba6b4731f7bcb8ddda16a4c37241cf79729d0cd43c86991" Jan 20 18:41:47 crc kubenswrapper[4661]: E0120 18:41:47.217257 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e91baff2e7e800f99ba6b4731f7bcb8ddda16a4c37241cf79729d0cd43c86991\": container with ID starting with e91baff2e7e800f99ba6b4731f7bcb8ddda16a4c37241cf79729d0cd43c86991 not found: ID does not exist" containerID="e91baff2e7e800f99ba6b4731f7bcb8ddda16a4c37241cf79729d0cd43c86991" Jan 20 18:41:47 crc kubenswrapper[4661]: I0120 18:41:47.217284 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e91baff2e7e800f99ba6b4731f7bcb8ddda16a4c37241cf79729d0cd43c86991"} err="failed to get container status \"e91baff2e7e800f99ba6b4731f7bcb8ddda16a4c37241cf79729d0cd43c86991\": rpc error: code = NotFound desc = could not find container \"e91baff2e7e800f99ba6b4731f7bcb8ddda16a4c37241cf79729d0cd43c86991\": container with ID starting with e91baff2e7e800f99ba6b4731f7bcb8ddda16a4c37241cf79729d0cd43c86991 not found: ID does not exist" Jan 20 18:41:48 crc kubenswrapper[4661]: I0120 18:41:48.152436 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="578ff0f1-a322-4357-8440-96511dfba67c" path="/var/lib/kubelet/pods/578ff0f1-a322-4357-8440-96511dfba67c/volumes" Jan 20 18:42:29 crc kubenswrapper[4661]: I0120 18:42:29.324506 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:42:29 crc kubenswrapper[4661]: I0120 18:42:29.325132 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:42:59 crc kubenswrapper[4661]: I0120 18:42:59.323909 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:42:59 crc kubenswrapper[4661]: I0120 18:42:59.324568 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:43:29 crc kubenswrapper[4661]: I0120 18:43:29.324187 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:43:29 crc kubenswrapper[4661]: I0120 18:43:29.325860 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:43:29 crc kubenswrapper[4661]: I0120 18:43:29.325924 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:43:29 crc kubenswrapper[4661]: I0120 18:43:29.326723 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 18:43:29 crc kubenswrapper[4661]: I0120 18:43:29.326789 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" gracePeriod=600 Jan 20 18:43:29 crc kubenswrapper[4661]: E0120 18:43:29.462500 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:43:29 crc kubenswrapper[4661]: I0120 18:43:29.944210 4661 
generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" exitCode=0 Jan 20 18:43:29 crc kubenswrapper[4661]: I0120 18:43:29.944257 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791"} Jan 20 18:43:29 crc kubenswrapper[4661]: I0120 18:43:29.944503 4661 scope.go:117] "RemoveContainer" containerID="525958e045fed6e29649f5dde6c90a67fbcda63ec54c37aaef4ceed26f58480a" Jan 20 18:43:29 crc kubenswrapper[4661]: I0120 18:43:29.945122 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:43:29 crc kubenswrapper[4661]: E0120 18:43:29.945513 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:43:45 crc kubenswrapper[4661]: I0120 18:43:45.143450 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:43:45 crc kubenswrapper[4661]: E0120 18:43:45.144220 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:44:00 crc kubenswrapper[4661]: I0120 18:44:00.141792 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:44:00 crc kubenswrapper[4661]: E0120 18:44:00.142497 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:44:11 crc kubenswrapper[4661]: I0120 18:44:11.142345 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:44:11 crc kubenswrapper[4661]: E0120 18:44:11.143533 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.405508 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.417319 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.426966 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.435325 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rg7xl"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.443709 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.450493 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.456560 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.462239 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wvdt6"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.467728 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.473134 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.480012 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.486314 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rp546"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.492307 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wvdt6"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.498691 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lc5l9"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.505269 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5hhlm"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.512147 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9k7lh"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.518411 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-s4q92"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.524245 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-thssw"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.530587 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mjwn"] Jan 20 18:44:16 crc kubenswrapper[4661]: I0120 18:44:16.536088 4661 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7d2mh"] Jan 20 18:44:18 crc kubenswrapper[4661]: I0120 18:44:18.157623 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ea87ee-8ceb-4be6-b968-1ab597f5c7b1" path="/var/lib/kubelet/pods/04ea87ee-8ceb-4be6-b968-1ab597f5c7b1/volumes" Jan 20 18:44:18 crc kubenswrapper[4661]: I0120 18:44:18.159976 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da242a6-ba42-4b70-9745-e06ea2a4146e" path="/var/lib/kubelet/pods/3da242a6-ba42-4b70-9745-e06ea2a4146e/volumes" Jan 20 18:44:18 crc kubenswrapper[4661]: I0120 18:44:18.160767 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="404d95a4-80b6-44e9-92ff-3d9f880ade4b" path="/var/lib/kubelet/pods/404d95a4-80b6-44e9-92ff-3d9f880ade4b/volumes" Jan 20 18:44:18 crc kubenswrapper[4661]: I0120 18:44:18.161485 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4931cafe-18cd-4020-9112-610654812598" path="/var/lib/kubelet/pods/4931cafe-18cd-4020-9112-610654812598/volumes" Jan 20 18:44:18 crc kubenswrapper[4661]: I0120 18:44:18.162274 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5203b99e-3f05-4d7d-8900-ee1c7d58526d" path="/var/lib/kubelet/pods/5203b99e-3f05-4d7d-8900-ee1c7d58526d/volumes" Jan 20 18:44:18 crc kubenswrapper[4661]: I0120 18:44:18.164189 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="562e95fc-8559-456b-b0b8-ace033341f49" path="/var/lib/kubelet/pods/562e95fc-8559-456b-b0b8-ace033341f49/volumes" Jan 20 18:44:18 crc kubenswrapper[4661]: I0120 18:44:18.164864 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66e42258-c528-4826-b990-06290a7e595d" path="/var/lib/kubelet/pods/66e42258-c528-4826-b990-06290a7e595d/volumes" Jan 20 18:44:18 crc kubenswrapper[4661]: I0120 18:44:18.165474 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6cfabfa-20dc-43e7-895a-cfeabfefcde1" path="/var/lib/kubelet/pods/d6cfabfa-20dc-43e7-895a-cfeabfefcde1/volumes" Jan 20 18:44:18 crc kubenswrapper[4661]: I0120 18:44:18.166142 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e88a285a-dd4f-4c1f-9b33-c8ebe02c4047" path="/var/lib/kubelet/pods/e88a285a-dd4f-4c1f-9b33-c8ebe02c4047/volumes" Jan 20 18:44:18 crc kubenswrapper[4661]: I0120 18:44:18.167309 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5353890-587c-4ab1-b6a5-8e21f7c573b5" path="/var/lib/kubelet/pods/f5353890-587c-4ab1-b6a5-8e21f7c573b5/volumes" Jan 20 18:44:25 crc kubenswrapper[4661]: I0120 18:44:25.142598 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:44:25 crc kubenswrapper[4661]: E0120 18:44:25.143741 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.624709 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9"] Jan 20 18:44:29 crc kubenswrapper[4661]: E0120 18:44:29.625311 4661 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="578ff0f1-a322-4357-8440-96511dfba67c" containerName="extract-utilities" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.625325 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="578ff0f1-a322-4357-8440-96511dfba67c" containerName="extract-utilities" Jan 20 18:44:29 crc kubenswrapper[4661]: E0120 18:44:29.625346 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578ff0f1-a322-4357-8440-96511dfba67c" containerName="registry-server" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.625354 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="578ff0f1-a322-4357-8440-96511dfba67c" containerName="registry-server" Jan 20 18:44:29 crc kubenswrapper[4661]: E0120 18:44:29.625366 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578ff0f1-a322-4357-8440-96511dfba67c" containerName="extract-content" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.625374 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="578ff0f1-a322-4357-8440-96511dfba67c" containerName="extract-content" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.625586 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="578ff0f1-a322-4357-8440-96511dfba67c" containerName="registry-server" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.626468 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.633877 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.634102 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.633973 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.634034 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.636350 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.636339 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9"] Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.734272 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.734374 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.734424 
4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.734454 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.734811 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbxnl\" (UniqueName: \"kubernetes.io/projected/39ff301d-9d0a-441f-879b-64ddb885ad9b-kube-api-access-jbxnl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.836066 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.836162 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.836222 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.836342 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbxnl\" (UniqueName: \"kubernetes.io/projected/39ff301d-9d0a-441f-879b-64ddb885ad9b-kube-api-access-jbxnl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.836410 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 
crc kubenswrapper[4661]: I0120 18:44:29.842947 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.843383 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.845738 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.849196 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.855756 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbxnl\" (UniqueName: \"kubernetes.io/projected/39ff301d-9d0a-441f-879b-64ddb885ad9b-kube-api-access-jbxnl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:29 crc kubenswrapper[4661]: I0120 18:44:29.958183 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:30 crc kubenswrapper[4661]: I0120 18:44:30.559690 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9"] Jan 20 18:44:31 crc kubenswrapper[4661]: I0120 18:44:31.540151 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" event={"ID":"39ff301d-9d0a-441f-879b-64ddb885ad9b","Type":"ContainerStarted","Data":"1c16110a4ab931397bf146180368c731dcd66f4c7ef85ebca0614d3948b282ca"} Jan 20 18:44:31 crc kubenswrapper[4661]: I0120 18:44:31.540530 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" event={"ID":"39ff301d-9d0a-441f-879b-64ddb885ad9b","Type":"ContainerStarted","Data":"d0a68644a77f3d2156e9c522e55b8fef5966d6f2af45c43653296fc95ff9b070"} Jan 20 18:44:31 crc kubenswrapper[4661]: I0120 18:44:31.561467 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" podStartSLOduration=2.094714613 podStartE2EDuration="2.561442668s" podCreationTimestamp="2026-01-20 18:44:29 +0000 UTC" firstStartedPulling="2026-01-20 18:44:30.57463038 +0000 UTC m=+2326.905420052" lastFinishedPulling="2026-01-20 18:44:31.041358445 +0000 UTC m=+2327.372148107" observedRunningTime="2026-01-20 18:44:31.558633415 +0000 UTC m=+2327.889423117" watchObservedRunningTime="2026-01-20 18:44:31.561442668 +0000 UTC m=+2327.892232360" Jan 20 18:44:36 crc kubenswrapper[4661]: I0120 18:44:36.143243 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:44:36 crc kubenswrapper[4661]: E0120 18:44:36.145645 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:44:46 crc kubenswrapper[4661]: I0120 18:44:46.687760 4661 generic.go:334] "Generic (PLEG): container finished" podID="39ff301d-9d0a-441f-879b-64ddb885ad9b" containerID="1c16110a4ab931397bf146180368c731dcd66f4c7ef85ebca0614d3948b282ca" exitCode=0 Jan 20 18:44:46 crc kubenswrapper[4661]: I0120 18:44:46.687897 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" event={"ID":"39ff301d-9d0a-441f-879b-64ddb885ad9b","Type":"ContainerDied","Data":"1c16110a4ab931397bf146180368c731dcd66f4c7ef85ebca0614d3948b282ca"} Jan 20 18:44:47 crc kubenswrapper[4661]: I0120 18:44:47.142187 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:44:47 crc kubenswrapper[4661]: E0120 18:44:47.142732 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 
18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.136085 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.283985 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbxnl\" (UniqueName: \"kubernetes.io/projected/39ff301d-9d0a-441f-879b-64ddb885ad9b-kube-api-access-jbxnl\") pod \"39ff301d-9d0a-441f-879b-64ddb885ad9b\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.284347 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-ssh-key-openstack-edpm-ipam\") pod \"39ff301d-9d0a-441f-879b-64ddb885ad9b\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.284414 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-ceph\") pod \"39ff301d-9d0a-441f-879b-64ddb885ad9b\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.284514 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-repo-setup-combined-ca-bundle\") pod \"39ff301d-9d0a-441f-879b-64ddb885ad9b\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.284637 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-inventory\") pod \"39ff301d-9d0a-441f-879b-64ddb885ad9b\" (UID: \"39ff301d-9d0a-441f-879b-64ddb885ad9b\") " Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.290187 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-ceph" (OuterVolumeSpecName: "ceph") pod "39ff301d-9d0a-441f-879b-64ddb885ad9b" (UID: "39ff301d-9d0a-441f-879b-64ddb885ad9b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.290196 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39ff301d-9d0a-441f-879b-64ddb885ad9b-kube-api-access-jbxnl" (OuterVolumeSpecName: "kube-api-access-jbxnl") pod "39ff301d-9d0a-441f-879b-64ddb885ad9b" (UID: "39ff301d-9d0a-441f-879b-64ddb885ad9b"). InnerVolumeSpecName "kube-api-access-jbxnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.290229 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "39ff301d-9d0a-441f-879b-64ddb885ad9b" (UID: "39ff301d-9d0a-441f-879b-64ddb885ad9b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.313312 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "39ff301d-9d0a-441f-879b-64ddb885ad9b" (UID: "39ff301d-9d0a-441f-879b-64ddb885ad9b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.320175 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-inventory" (OuterVolumeSpecName: "inventory") pod "39ff301d-9d0a-441f-879b-64ddb885ad9b" (UID: "39ff301d-9d0a-441f-879b-64ddb885ad9b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.387392 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbxnl\" (UniqueName: \"kubernetes.io/projected/39ff301d-9d0a-441f-879b-64ddb885ad9b-kube-api-access-jbxnl\") on node \"crc\" DevicePath \"\"" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.387432 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.387445 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.387455 4661 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.387468 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39ff301d-9d0a-441f-879b-64ddb885ad9b-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.707795 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" event={"ID":"39ff301d-9d0a-441f-879b-64ddb885ad9b","Type":"ContainerDied","Data":"d0a68644a77f3d2156e9c522e55b8fef5966d6f2af45c43653296fc95ff9b070"} Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.707841 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0a68644a77f3d2156e9c522e55b8fef5966d6f2af45c43653296fc95ff9b070" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.707860 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.825400 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7"] Jan 20 18:44:48 crc kubenswrapper[4661]: E0120 18:44:48.825871 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ff301d-9d0a-441f-879b-64ddb885ad9b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.825895 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ff301d-9d0a-441f-879b-64ddb885ad9b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.826111 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="39ff301d-9d0a-441f-879b-64ddb885ad9b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.826787 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.829094 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.829546 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.829910 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.830072 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.830234 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:44:48 crc kubenswrapper[4661]: I0120 18:44:48.853598 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7"] Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.001527 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.001611 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6stq7\" (UniqueName: \"kubernetes.io/projected/e65dc54b-d336-441e-b167-cb297ef179a5-kube-api-access-6stq7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.001697 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: 
\"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.001799 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.001829 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.103719 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.103790 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.103865 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.103935 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6stq7\" (UniqueName: \"kubernetes.io/projected/e65dc54b-d336-441e-b167-cb297ef179a5-kube-api-access-6stq7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.103955 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.108541 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" 
(UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.108747 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.111401 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.113222 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.122893 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6stq7\" (UniqueName: \"kubernetes.io/projected/e65dc54b-d336-441e-b167-cb297ef179a5-kube-api-access-6stq7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.165465 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.712560 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7"] Jan 20 18:44:49 crc kubenswrapper[4661]: I0120 18:44:49.714248 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" event={"ID":"e65dc54b-d336-441e-b167-cb297ef179a5","Type":"ContainerStarted","Data":"f94f7986e51a1e531814b2eab594a6dfca53a5a4595692a05101c2e7e2acb0e4"} Jan 20 18:44:50 crc kubenswrapper[4661]: I0120 18:44:50.724746 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" event={"ID":"e65dc54b-d336-441e-b167-cb297ef179a5","Type":"ContainerStarted","Data":"3a0a0ed6afec075cca037be86d8d33008a6ca28a64623d1f9af3667220ad8a04"} Jan 20 18:44:50 crc kubenswrapper[4661]: I0120 18:44:50.746024 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" podStartSLOduration=2.14285887 podStartE2EDuration="2.746002372s" podCreationTimestamp="2026-01-20 18:44:48 +0000 UTC" firstStartedPulling="2026-01-20 18:44:49.705479804 +0000 UTC m=+2346.036269466" lastFinishedPulling="2026-01-20 18:44:50.308623286 +0000 UTC m=+2346.639412968" observedRunningTime="2026-01-20 18:44:50.743880017 +0000 UTC m=+2347.074669689" watchObservedRunningTime="2026-01-20 18:44:50.746002372 +0000 UTC m=+2347.076792054" Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.165973 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8"] Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.168305 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8"] Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.168452 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.170838 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.171157 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.307005 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-secret-volume\") pod \"collect-profiles-29482245-w4pl8\" (UID: \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.307038 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg6hg\" (UniqueName: \"kubernetes.io/projected/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-kube-api-access-wg6hg\") pod \"collect-profiles-29482245-w4pl8\" (UID: \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.307132 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-config-volume\") pod \"collect-profiles-29482245-w4pl8\" (UID: \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.409442 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-secret-volume\") pod \"collect-profiles-29482245-w4pl8\" (UID: \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.409772 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg6hg\" (UniqueName: \"kubernetes.io/projected/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-kube-api-access-wg6hg\") pod \"collect-profiles-29482245-w4pl8\" (UID: \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.409955 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-config-volume\") pod \"collect-profiles-29482245-w4pl8\" (UID: \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.410824 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-config-volume\") pod \"collect-profiles-29482245-w4pl8\" (UID: \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" Jan 20 18:45:00 crc 
kubenswrapper[4661]: I0120 18:45:00.424249 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-secret-volume\") pod \"collect-profiles-29482245-w4pl8\" (UID: \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.428080 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg6hg\" (UniqueName: \"kubernetes.io/projected/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-kube-api-access-wg6hg\") pod \"collect-profiles-29482245-w4pl8\" (UID: \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.496062 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" Jan 20 18:45:00 crc kubenswrapper[4661]: I0120 18:45:00.948386 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8"] Jan 20 18:45:01 crc kubenswrapper[4661]: I0120 18:45:01.825824 4661 generic.go:334] "Generic (PLEG): container finished" podID="2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62" containerID="b6c0c5fbff6a14c086f1e79f41da24c041b84daf259323785fe0df6e12619f9d" exitCode=0 Jan 20 18:45:01 crc kubenswrapper[4661]: I0120 18:45:01.825881 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" event={"ID":"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62","Type":"ContainerDied","Data":"b6c0c5fbff6a14c086f1e79f41da24c041b84daf259323785fe0df6e12619f9d"} Jan 20 18:45:01 crc kubenswrapper[4661]: I0120 18:45:01.826844 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" event={"ID":"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62","Type":"ContainerStarted","Data":"cb8572f2474c66c633608adfb123e11aba02854911b3c9bbf2cddb15f95d232f"} Jan 20 18:45:02 crc kubenswrapper[4661]: I0120 18:45:02.142039 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:45:02 crc kubenswrapper[4661]: E0120 18:45:02.142473 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:45:03 crc kubenswrapper[4661]: I0120 18:45:03.172923 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" Jan 20 18:45:03 crc kubenswrapper[4661]: I0120 18:45:03.261329 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg6hg\" (UniqueName: \"kubernetes.io/projected/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-kube-api-access-wg6hg\") pod \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\" (UID: \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\") " Jan 20 18:45:03 crc kubenswrapper[4661]: I0120 18:45:03.261471 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-config-volume\") pod \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\" (UID: \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\") " Jan 20 18:45:03 crc kubenswrapper[4661]: I0120 18:45:03.261648 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-secret-volume\") pod \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\" (UID: \"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62\") " Jan 20 18:45:03 crc kubenswrapper[4661]: I0120 18:45:03.262940 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-config-volume" (OuterVolumeSpecName: "config-volume") pod "2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62" (UID: "2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:45:03 crc kubenswrapper[4661]: I0120 18:45:03.269593 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-kube-api-access-wg6hg" (OuterVolumeSpecName: "kube-api-access-wg6hg") pod "2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62" (UID: "2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62"). InnerVolumeSpecName "kube-api-access-wg6hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:45:03 crc kubenswrapper[4661]: I0120 18:45:03.273810 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62" (UID: "2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:45:03 crc kubenswrapper[4661]: I0120 18:45:03.364821 4661 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 18:45:03 crc kubenswrapper[4661]: I0120 18:45:03.365151 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg6hg\" (UniqueName: \"kubernetes.io/projected/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-kube-api-access-wg6hg\") on node \"crc\" DevicePath \"\"" Jan 20 18:45:03 crc kubenswrapper[4661]: I0120 18:45:03.365301 4661 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 18:45:03 crc kubenswrapper[4661]: I0120 18:45:03.843610 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" event={"ID":"2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62","Type":"ContainerDied","Data":"cb8572f2474c66c633608adfb123e11aba02854911b3c9bbf2cddb15f95d232f"} Jan 20 18:45:03 crc kubenswrapper[4661]: I0120 18:45:03.843912 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb8572f2474c66c633608adfb123e11aba02854911b3c9bbf2cddb15f95d232f" Jan 20 18:45:03 crc kubenswrapper[4661]: I0120 18:45:03.843702 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8" Jan 20 18:45:04 crc kubenswrapper[4661]: I0120 18:45:04.257997 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc"] Jan 20 18:45:04 crc kubenswrapper[4661]: I0120 18:45:04.264453 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482200-ckvfc"] Jan 20 18:45:06 crc kubenswrapper[4661]: I0120 18:45:06.156333 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a1f08ef-d9a0-484e-9959-14d3ab178d28" path="/var/lib/kubelet/pods/7a1f08ef-d9a0-484e-9959-14d3ab178d28/volumes" Jan 20 18:45:13 crc kubenswrapper[4661]: I0120 18:45:13.142124 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:45:13 crc kubenswrapper[4661]: E0120 18:45:13.143386 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:45:17 crc kubenswrapper[4661]: I0120 18:45:17.246765 4661 scope.go:117] "RemoveContainer" containerID="ac5b500f32ad20fd3818d44f140e4b9f1a7d4df30f7c916ddcd4d8434d8ab239" Jan 20 18:45:17 crc kubenswrapper[4661]: I0120 18:45:17.295878 4661 scope.go:117] "RemoveContainer" containerID="5bc222cbabf1caaccf29dc04a46ebaf7c5ea9847c82556c26cd119bc347eb80a" Jan 20 18:45:17 crc kubenswrapper[4661]: I0120 18:45:17.331258 4661 scope.go:117] "RemoveContainer" containerID="f95ab87f307fe5e17634937beab9d1ecac56bf1054ac4aac49b88b1ed27f3712" Jan 20 18:45:17 crc kubenswrapper[4661]: I0120 18:45:17.381721 
4661 scope.go:117] "RemoveContainer" containerID="09d2b4b0165214612382aee45c81e5b92cbceee390be329108e78346bd4bc33d" Jan 20 18:45:17 crc kubenswrapper[4661]: I0120 18:45:17.453591 4661 scope.go:117] "RemoveContainer" containerID="6c5731613a6a89276146dee2b213cfb655cc82037b6f9976ba27a14101f5885c" Jan 20 18:45:17 crc kubenswrapper[4661]: I0120 18:45:17.489371 4661 scope.go:117] "RemoveContainer" containerID="7337c5fd849d4afc21661eac4a3715efcde8f73ebfa8acba7df21a1df12e0ce2" Jan 20 18:45:17 crc kubenswrapper[4661]: I0120 18:45:17.570190 4661 scope.go:117] "RemoveContainer" containerID="6fb88c73f55b9609f9d50f268b3726ae00d7cf53163008b6105adffe39fdd063" Jan 20 18:45:17 crc kubenswrapper[4661]: I0120 18:45:17.684462 4661 scope.go:117] "RemoveContainer" containerID="f9e5e1c9ec7c2f5244b61c235e49d2fd02ac958a806cd3e29467783a51b62a32" Jan 20 18:45:17 crc kubenswrapper[4661]: I0120 18:45:17.772908 4661 scope.go:117] "RemoveContainer" containerID="0ad173982e1363dc07bbdce62e931022fe0217632a01343613c2b02123495a44" Jan 20 18:45:17 crc kubenswrapper[4661]: I0120 18:45:17.809255 4661 scope.go:117] "RemoveContainer" containerID="dfd542129740de45defae7d074253489ae918f116b9e5462695e4151363cb82d" Jan 20 18:45:17 crc kubenswrapper[4661]: I0120 18:45:17.836066 4661 scope.go:117] "RemoveContainer" containerID="46b6d6c55a17c7a8bb051cc7f13181d92a0c99d08ba928aa462ccfa513560894" Jan 20 18:45:24 crc kubenswrapper[4661]: I0120 18:45:24.146585 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:45:24 crc kubenswrapper[4661]: E0120 18:45:24.147520 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:45:39 crc kubenswrapper[4661]: I0120 18:45:39.142741 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:45:39 crc kubenswrapper[4661]: E0120 18:45:39.143770 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:45:52 crc kubenswrapper[4661]: I0120 18:45:52.142259 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:45:52 crc kubenswrapper[4661]: E0120 18:45:52.143083 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:46:05 crc kubenswrapper[4661]: I0120 18:46:05.142512 4661 scope.go:117] "RemoveContainer" 
containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:46:05 crc kubenswrapper[4661]: E0120 18:46:05.143272 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:46:19 crc kubenswrapper[4661]: I0120 18:46:19.142189 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:46:19 crc kubenswrapper[4661]: E0120 18:46:19.143016 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:46:31 crc kubenswrapper[4661]: I0120 18:46:31.142664 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:46:31 crc kubenswrapper[4661]: E0120 18:46:31.143596 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:46:40 crc kubenswrapper[4661]: I0120 18:46:40.061164 4661 generic.go:334] "Generic (PLEG): container finished" podID="e65dc54b-d336-441e-b167-cb297ef179a5" containerID="3a0a0ed6afec075cca037be86d8d33008a6ca28a64623d1f9af3667220ad8a04" exitCode=0 Jan 20 18:46:40 crc kubenswrapper[4661]: I0120 18:46:40.061250 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" event={"ID":"e65dc54b-d336-441e-b167-cb297ef179a5","Type":"ContainerDied","Data":"3a0a0ed6afec075cca037be86d8d33008a6ca28a64623d1f9af3667220ad8a04"} Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.510618 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.635249 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-inventory\") pod \"e65dc54b-d336-441e-b167-cb297ef179a5\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.635337 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-ceph\") pod \"e65dc54b-d336-441e-b167-cb297ef179a5\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.635378 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-ssh-key-openstack-edpm-ipam\") pod \"e65dc54b-d336-441e-b167-cb297ef179a5\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.635430 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-bootstrap-combined-ca-bundle\") pod \"e65dc54b-d336-441e-b167-cb297ef179a5\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.635497 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6stq7\" (UniqueName: \"kubernetes.io/projected/e65dc54b-d336-441e-b167-cb297ef179a5-kube-api-access-6stq7\") pod \"e65dc54b-d336-441e-b167-cb297ef179a5\" (UID: \"e65dc54b-d336-441e-b167-cb297ef179a5\") " Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.640793 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e65dc54b-d336-441e-b167-cb297ef179a5" (UID: "e65dc54b-d336-441e-b167-cb297ef179a5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.640842 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65dc54b-d336-441e-b167-cb297ef179a5-kube-api-access-6stq7" (OuterVolumeSpecName: "kube-api-access-6stq7") pod "e65dc54b-d336-441e-b167-cb297ef179a5" (UID: "e65dc54b-d336-441e-b167-cb297ef179a5"). InnerVolumeSpecName "kube-api-access-6stq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.642712 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-ceph" (OuterVolumeSpecName: "ceph") pod "e65dc54b-d336-441e-b167-cb297ef179a5" (UID: "e65dc54b-d336-441e-b167-cb297ef179a5"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.662433 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e65dc54b-d336-441e-b167-cb297ef179a5" (UID: "e65dc54b-d336-441e-b167-cb297ef179a5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.666398 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-inventory" (OuterVolumeSpecName: "inventory") pod "e65dc54b-d336-441e-b167-cb297ef179a5" (UID: "e65dc54b-d336-441e-b167-cb297ef179a5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.737255 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.737288 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.737297 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.737307 4661 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65dc54b-d336-441e-b167-cb297ef179a5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:46:41 crc kubenswrapper[4661]: I0120 18:46:41.737316 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6stq7\" (UniqueName: \"kubernetes.io/projected/e65dc54b-d336-441e-b167-cb297ef179a5-kube-api-access-6stq7\") on node \"crc\" DevicePath \"\"" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.078916 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" event={"ID":"e65dc54b-d336-441e-b167-cb297ef179a5","Type":"ContainerDied","Data":"f94f7986e51a1e531814b2eab594a6dfca53a5a4595692a05101c2e7e2acb0e4"} Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.079295 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f94f7986e51a1e531814b2eab594a6dfca53a5a4595692a05101c2e7e2acb0e4" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.078973 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.142477 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:46:42 crc kubenswrapper[4661]: E0120 18:46:42.143011 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.198052 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9"] Jan 20 18:46:42 crc kubenswrapper[4661]: E0120 18:46:42.198594 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65dc54b-d336-441e-b167-cb297ef179a5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.198633 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65dc54b-d336-441e-b167-cb297ef179a5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 20 18:46:42 crc kubenswrapper[4661]: E0120 18:46:42.198694 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62" containerName="collect-profiles" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.198708 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62" containerName="collect-profiles" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.198962 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62" containerName="collect-profiles" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.199021 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65dc54b-d336-441e-b167-cb297ef179a5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.199852 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.201650 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.202769 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.203136 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.204078 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.207041 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.224948 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9"] Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.347900 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.347993 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlxcf\" (UniqueName: \"kubernetes.io/projected/77d1abe1-5293-4f5d-b062-d3fc2bb71510-kube-api-access-rlxcf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.348047 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.348173 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.450358 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:42 crc 
kubenswrapper[4661]: I0120 18:46:42.450440 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlxcf\" (UniqueName: \"kubernetes.io/projected/77d1abe1-5293-4f5d-b062-d3fc2bb71510-kube-api-access-rlxcf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.450496 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.450534 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.455914 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.456096 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.466233 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.474933 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlxcf\" (UniqueName: \"kubernetes.io/projected/77d1abe1-5293-4f5d-b062-d3fc2bb71510-kube-api-access-rlxcf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:42 crc kubenswrapper[4661]: I0120 18:46:42.516795 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:46:43 crc kubenswrapper[4661]: I0120 18:46:43.103019 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9"] Jan 20 18:46:43 crc kubenswrapper[4661]: W0120 18:46:43.109178 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77d1abe1_5293_4f5d_b062_d3fc2bb71510.slice/crio-c47f9d13e76b59eca8080d5547717cd4b66f3bce02d7eef6376cac8e23ce7a13 WatchSource:0}: Error finding container c47f9d13e76b59eca8080d5547717cd4b66f3bce02d7eef6376cac8e23ce7a13: Status 404 returned error can't find the container with id c47f9d13e76b59eca8080d5547717cd4b66f3bce02d7eef6376cac8e23ce7a13 Jan 20 18:46:43 crc kubenswrapper[4661]: I0120 18:46:43.111422 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 18:46:44 crc kubenswrapper[4661]: I0120 18:46:44.095121 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" event={"ID":"77d1abe1-5293-4f5d-b062-d3fc2bb71510","Type":"ContainerStarted","Data":"79707604cf41fc8c8124236b0d069751fb177eb0662dfd6191a3e353243a7024"} Jan 20 18:46:44 crc kubenswrapper[4661]: I0120 18:46:44.096520 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" event={"ID":"77d1abe1-5293-4f5d-b062-d3fc2bb71510","Type":"ContainerStarted","Data":"c47f9d13e76b59eca8080d5547717cd4b66f3bce02d7eef6376cac8e23ce7a13"} Jan 20 18:46:44 crc kubenswrapper[4661]: I0120 18:46:44.116985 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" podStartSLOduration=1.667487385 podStartE2EDuration="2.116968181s" podCreationTimestamp="2026-01-20 18:46:42 +0000 UTC" firstStartedPulling="2026-01-20 18:46:43.11112926 +0000 UTC m=+2459.441918932" lastFinishedPulling="2026-01-20 18:46:43.560610066 +0000 UTC m=+2459.891399728" observedRunningTime="2026-01-20 18:46:44.109694213 +0000 UTC m=+2460.440483885" watchObservedRunningTime="2026-01-20 18:46:44.116968181 +0000 UTC m=+2460.447757833" Jan 20 18:46:55 crc kubenswrapper[4661]: I0120 18:46:55.142449 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:46:55 crc kubenswrapper[4661]: E0120 18:46:55.143140 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:47:09 crc kubenswrapper[4661]: I0120 18:47:09.142734 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:47:09 crc kubenswrapper[4661]: E0120 18:47:09.143472 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.179798 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wfq2c"] Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.183458 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.192194 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfq2c"] Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.309786 4661 generic.go:334] "Generic (PLEG): container finished" podID="77d1abe1-5293-4f5d-b062-d3fc2bb71510" containerID="79707604cf41fc8c8124236b0d069751fb177eb0662dfd6191a3e353243a7024" exitCode=0 Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.309835 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" event={"ID":"77d1abe1-5293-4f5d-b062-d3fc2bb71510","Type":"ContainerDied","Data":"79707604cf41fc8c8124236b0d069751fb177eb0662dfd6191a3e353243a7024"} Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.329109 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx2sn\" (UniqueName: \"kubernetes.io/projected/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-kube-api-access-dx2sn\") pod \"certified-operators-wfq2c\" (UID: \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\") " pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.329254 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-catalog-content\") pod \"certified-operators-wfq2c\" (UID: \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\") " pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.329351 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-utilities\") pod \"certified-operators-wfq2c\" (UID: \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\") " pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.430877 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-catalog-content\") pod \"certified-operators-wfq2c\" (UID: \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\") " pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.430971 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-utilities\") pod \"certified-operators-wfq2c\" (UID: \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\") " pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.431017 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dx2sn\" (UniqueName: \"kubernetes.io/projected/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-kube-api-access-dx2sn\") pod \"certified-operators-wfq2c\" (UID: \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\") " pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.431391 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-catalog-content\") pod \"certified-operators-wfq2c\" (UID: \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\") " pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.431523 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-utilities\") pod \"certified-operators-wfq2c\" (UID: \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\") " pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.453571 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx2sn\" (UniqueName: \"kubernetes.io/projected/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-kube-api-access-dx2sn\") pod \"certified-operators-wfq2c\" (UID: \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\") " pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:12 crc kubenswrapper[4661]: I0120 18:47:12.548434 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.119174 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfq2c"] Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.323458 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfq2c" event={"ID":"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5","Type":"ContainerStarted","Data":"614b4e77c1c595d48bca257d8d2689338804a8ee18d82d19677096e3e90df12a"} Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.588364 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bxk9f"] Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.590873 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.613517 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bxk9f"] Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.761624 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vltc\" (UniqueName: \"kubernetes.io/projected/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-kube-api-access-6vltc\") pod \"community-operators-bxk9f\" (UID: \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\") " pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.761698 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-catalog-content\") pod \"community-operators-bxk9f\" (UID: \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\") " pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.761907 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-utilities\") pod \"community-operators-bxk9f\" (UID: \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\") " pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.864009 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-utilities\") pod \"community-operators-bxk9f\" (UID: \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\") " pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.864209 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vltc\" (UniqueName: \"kubernetes.io/projected/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-kube-api-access-6vltc\") pod \"community-operators-bxk9f\" (UID: \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\") " pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.864241 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-catalog-content\") pod \"community-operators-bxk9f\" (UID: \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\") " pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.864901 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-utilities\") pod \"community-operators-bxk9f\" (UID: \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\") " pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.864908 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-catalog-content\") pod \"community-operators-bxk9f\" (UID: \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\") " pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.886049 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6vltc\" (UniqueName: \"kubernetes.io/projected/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-kube-api-access-6vltc\") pod \"community-operators-bxk9f\" (UID: \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\") " pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:13 crc kubenswrapper[4661]: I0120 18:47:13.937982 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.122914 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.271075 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlxcf\" (UniqueName: \"kubernetes.io/projected/77d1abe1-5293-4f5d-b062-d3fc2bb71510-kube-api-access-rlxcf\") pod \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.271167 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-ceph\") pod \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.271248 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-ssh-key-openstack-edpm-ipam\") pod \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.271350 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-inventory\") pod \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\" (UID: \"77d1abe1-5293-4f5d-b062-d3fc2bb71510\") " Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.287112 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-ceph" (OuterVolumeSpecName: "ceph") pod "77d1abe1-5293-4f5d-b062-d3fc2bb71510" (UID: "77d1abe1-5293-4f5d-b062-d3fc2bb71510"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.297842 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77d1abe1-5293-4f5d-b062-d3fc2bb71510-kube-api-access-rlxcf" (OuterVolumeSpecName: "kube-api-access-rlxcf") pod "77d1abe1-5293-4f5d-b062-d3fc2bb71510" (UID: "77d1abe1-5293-4f5d-b062-d3fc2bb71510"). InnerVolumeSpecName "kube-api-access-rlxcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.324837 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-inventory" (OuterVolumeSpecName: "inventory") pod "77d1abe1-5293-4f5d-b062-d3fc2bb71510" (UID: "77d1abe1-5293-4f5d-b062-d3fc2bb71510"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.341408 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "77d1abe1-5293-4f5d-b062-d3fc2bb71510" (UID: "77d1abe1-5293-4f5d-b062-d3fc2bb71510"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.343856 4661 generic.go:334] "Generic (PLEG): container finished" podID="87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" containerID="7da903c6c5b34aaaff14931792a4e722d481265e6e90776c7b5a2132a2eb50ad" exitCode=0 Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.343937 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfq2c" event={"ID":"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5","Type":"ContainerDied","Data":"7da903c6c5b34aaaff14931792a4e722d481265e6e90776c7b5a2132a2eb50ad"} Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.352181 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" event={"ID":"77d1abe1-5293-4f5d-b062-d3fc2bb71510","Type":"ContainerDied","Data":"c47f9d13e76b59eca8080d5547717cd4b66f3bce02d7eef6376cac8e23ce7a13"} Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.352506 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c47f9d13e76b59eca8080d5547717cd4b66f3bce02d7eef6376cac8e23ce7a13" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.352288 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.377485 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlxcf\" (UniqueName: \"kubernetes.io/projected/77d1abe1-5293-4f5d-b062-d3fc2bb71510-kube-api-access-rlxcf\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.377517 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.377529 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.377538 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77d1abe1-5293-4f5d-b062-d3fc2bb71510-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.442006 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd"] Jan 20 18:47:14 crc kubenswrapper[4661]: E0120 18:47:14.442456 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77d1abe1-5293-4f5d-b062-d3fc2bb71510" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.442480 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="77d1abe1-5293-4f5d-b062-d3fc2bb71510" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.442736 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="77d1abe1-5293-4f5d-b062-d3fc2bb71510" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.447406 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.450269 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.450416 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.450444 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.450605 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.465999 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.492754 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd"] Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.535314 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bxk9f"] Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.582858 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wxksd\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.582978 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wxksd\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.583123 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wxksd\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.583241 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6mqk\" (UniqueName: \"kubernetes.io/projected/4190b947-a737-4a67-bfa9-dad8bb4a7499-kube-api-access-x6mqk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wxksd\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.684791 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wxksd\" (UID: 
\"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.684926 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6mqk\" (UniqueName: \"kubernetes.io/projected/4190b947-a737-4a67-bfa9-dad8bb4a7499-kube-api-access-x6mqk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wxksd\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.684970 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wxksd\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.685025 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wxksd\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.690275 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wxksd\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.690389 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wxksd\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.690965 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wxksd\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.703478 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6mqk\" (UniqueName: \"kubernetes.io/projected/4190b947-a737-4a67-bfa9-dad8bb4a7499-kube-api-access-x6mqk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wxksd\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:14 crc kubenswrapper[4661]: I0120 18:47:14.803920 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:15 crc kubenswrapper[4661]: I0120 18:47:15.285243 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd"] Jan 20 18:47:15 crc kubenswrapper[4661]: W0120 18:47:15.289859 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4190b947_a737_4a67_bfa9_dad8bb4a7499.slice/crio-e238bcb0aaee93f8b143e3f14366a432f4a9cc56ffe5d0d83fc175202c2cee29 WatchSource:0}: Error finding container e238bcb0aaee93f8b143e3f14366a432f4a9cc56ffe5d0d83fc175202c2cee29: Status 404 returned error can't find the container with id e238bcb0aaee93f8b143e3f14366a432f4a9cc56ffe5d0d83fc175202c2cee29 Jan 20 18:47:15 crc kubenswrapper[4661]: I0120 18:47:15.359954 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfq2c" event={"ID":"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5","Type":"ContainerStarted","Data":"8fb129d29077134403060cafe62b2fbe17856b73050e515caffc8960146694a3"} Jan 20 18:47:15 crc kubenswrapper[4661]: I0120 18:47:15.361977 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" event={"ID":"4190b947-a737-4a67-bfa9-dad8bb4a7499","Type":"ContainerStarted","Data":"e238bcb0aaee93f8b143e3f14366a432f4a9cc56ffe5d0d83fc175202c2cee29"} Jan 20 18:47:15 crc kubenswrapper[4661]: I0120 18:47:15.363906 4661 generic.go:334] "Generic (PLEG): container finished" podID="f4f2f01d-d131-4ad9-b60b-51f90d6a6655" containerID="0255956e590ba402ff5b7a18e33594cecaca6231ae75b389d8ba04eaa0a99a02" exitCode=0 Jan 20 18:47:15 crc kubenswrapper[4661]: I0120 18:47:15.363945 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxk9f" event={"ID":"f4f2f01d-d131-4ad9-b60b-51f90d6a6655","Type":"ContainerDied","Data":"0255956e590ba402ff5b7a18e33594cecaca6231ae75b389d8ba04eaa0a99a02"} Jan 20 18:47:15 crc kubenswrapper[4661]: I0120 18:47:15.363965 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxk9f" event={"ID":"f4f2f01d-d131-4ad9-b60b-51f90d6a6655","Type":"ContainerStarted","Data":"872bfbd4a85a819e784dd336d706e6a94a9a1bc993a557744121dd31b248a1c2"} Jan 20 18:47:16 crc kubenswrapper[4661]: I0120 18:47:16.371854 4661 generic.go:334] "Generic (PLEG): container finished" podID="87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" containerID="8fb129d29077134403060cafe62b2fbe17856b73050e515caffc8960146694a3" exitCode=0 Jan 20 18:47:16 crc kubenswrapper[4661]: I0120 18:47:16.372806 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfq2c" event={"ID":"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5","Type":"ContainerDied","Data":"8fb129d29077134403060cafe62b2fbe17856b73050e515caffc8960146694a3"} Jan 20 18:47:16 crc kubenswrapper[4661]: I0120 18:47:16.379609 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" event={"ID":"4190b947-a737-4a67-bfa9-dad8bb4a7499","Type":"ContainerStarted","Data":"863165a8106ca9613605eb95a79d8889a794d297d9badc237e83ebff8fc14cc3"} Jan 20 18:47:16 crc kubenswrapper[4661]: I0120 18:47:16.381161 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxk9f" 
event={"ID":"f4f2f01d-d131-4ad9-b60b-51f90d6a6655","Type":"ContainerStarted","Data":"9cc143086fb554c75addb968caf553241eaf7d48ac91ff0b7985b366c49b228d"} Jan 20 18:47:16 crc kubenswrapper[4661]: I0120 18:47:16.407963 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" podStartSLOduration=1.814378238 podStartE2EDuration="2.407943477s" podCreationTimestamp="2026-01-20 18:47:14 +0000 UTC" firstStartedPulling="2026-01-20 18:47:15.292466335 +0000 UTC m=+2491.623255997" lastFinishedPulling="2026-01-20 18:47:15.886031574 +0000 UTC m=+2492.216821236" observedRunningTime="2026-01-20 18:47:16.404384425 +0000 UTC m=+2492.735174097" watchObservedRunningTime="2026-01-20 18:47:16.407943477 +0000 UTC m=+2492.738733139" Jan 20 18:47:17 crc kubenswrapper[4661]: I0120 18:47:17.394256 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfq2c" event={"ID":"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5","Type":"ContainerStarted","Data":"130cdf30f52745d8a39fbbc543711800ce0a11b718ac03312fd917f8650a4dab"} Jan 20 18:47:17 crc kubenswrapper[4661]: I0120 18:47:17.400989 4661 generic.go:334] "Generic (PLEG): container finished" podID="f4f2f01d-d131-4ad9-b60b-51f90d6a6655" containerID="9cc143086fb554c75addb968caf553241eaf7d48ac91ff0b7985b366c49b228d" exitCode=0 Jan 20 18:47:17 crc kubenswrapper[4661]: I0120 18:47:17.402307 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxk9f" event={"ID":"f4f2f01d-d131-4ad9-b60b-51f90d6a6655","Type":"ContainerDied","Data":"9cc143086fb554c75addb968caf553241eaf7d48ac91ff0b7985b366c49b228d"} Jan 20 18:47:17 crc kubenswrapper[4661]: I0120 18:47:17.427003 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wfq2c" podStartSLOduration=2.931213145 podStartE2EDuration="5.426986351s" podCreationTimestamp="2026-01-20 18:47:12 +0000 UTC" firstStartedPulling="2026-01-20 18:47:14.346476914 +0000 UTC m=+2490.677266576" lastFinishedPulling="2026-01-20 18:47:16.84225012 +0000 UTC m=+2493.173039782" observedRunningTime="2026-01-20 18:47:17.423160222 +0000 UTC m=+2493.753949894" watchObservedRunningTime="2026-01-20 18:47:17.426986351 +0000 UTC m=+2493.757776003" Jan 20 18:47:19 crc kubenswrapper[4661]: I0120 18:47:19.421921 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxk9f" event={"ID":"f4f2f01d-d131-4ad9-b60b-51f90d6a6655","Type":"ContainerStarted","Data":"cd57d2c3075b17c859eb1190948f600f9e7be7149fa78f9ebc298bb074bcdde5"} Jan 20 18:47:21 crc kubenswrapper[4661]: I0120 18:47:21.440875 4661 generic.go:334] "Generic (PLEG): container finished" podID="4190b947-a737-4a67-bfa9-dad8bb4a7499" containerID="863165a8106ca9613605eb95a79d8889a794d297d9badc237e83ebff8fc14cc3" exitCode=0 Jan 20 18:47:21 crc kubenswrapper[4661]: I0120 18:47:21.440940 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" event={"ID":"4190b947-a737-4a67-bfa9-dad8bb4a7499","Type":"ContainerDied","Data":"863165a8106ca9613605eb95a79d8889a794d297d9badc237e83ebff8fc14cc3"} Jan 20 18:47:21 crc kubenswrapper[4661]: I0120 18:47:21.458431 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bxk9f" podStartSLOduration=4.967007594 podStartE2EDuration="8.458409737s" podCreationTimestamp="2026-01-20 
18:47:13 +0000 UTC" firstStartedPulling="2026-01-20 18:47:15.365810166 +0000 UTC m=+2491.696599828" lastFinishedPulling="2026-01-20 18:47:18.857212309 +0000 UTC m=+2495.188001971" observedRunningTime="2026-01-20 18:47:19.447793021 +0000 UTC m=+2495.778582693" watchObservedRunningTime="2026-01-20 18:47:21.458409737 +0000 UTC m=+2497.789199399" Jan 20 18:47:22 crc kubenswrapper[4661]: I0120 18:47:22.142727 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:47:22 crc kubenswrapper[4661]: E0120 18:47:22.143278 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:47:22 crc kubenswrapper[4661]: I0120 18:47:22.550013 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:22 crc kubenswrapper[4661]: I0120 18:47:22.550056 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:22 crc kubenswrapper[4661]: I0120 18:47:22.599230 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:22 crc kubenswrapper[4661]: I0120 18:47:22.835353 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:22 crc kubenswrapper[4661]: I0120 18:47:22.949961 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-ceph\") pod \"4190b947-a737-4a67-bfa9-dad8bb4a7499\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " Jan 20 18:47:22 crc kubenswrapper[4661]: I0120 18:47:22.950116 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-ssh-key-openstack-edpm-ipam\") pod \"4190b947-a737-4a67-bfa9-dad8bb4a7499\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " Jan 20 18:47:22 crc kubenswrapper[4661]: I0120 18:47:22.950159 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-inventory\") pod \"4190b947-a737-4a67-bfa9-dad8bb4a7499\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " Jan 20 18:47:22 crc kubenswrapper[4661]: I0120 18:47:22.950183 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6mqk\" (UniqueName: \"kubernetes.io/projected/4190b947-a737-4a67-bfa9-dad8bb4a7499-kube-api-access-x6mqk\") pod \"4190b947-a737-4a67-bfa9-dad8bb4a7499\" (UID: \"4190b947-a737-4a67-bfa9-dad8bb4a7499\") " Jan 20 18:47:22 crc kubenswrapper[4661]: I0120 18:47:22.955542 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-ceph" (OuterVolumeSpecName: "ceph") pod "4190b947-a737-4a67-bfa9-dad8bb4a7499" (UID: "4190b947-a737-4a67-bfa9-dad8bb4a7499"). 
InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:47:22 crc kubenswrapper[4661]: I0120 18:47:22.969759 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4190b947-a737-4a67-bfa9-dad8bb4a7499-kube-api-access-x6mqk" (OuterVolumeSpecName: "kube-api-access-x6mqk") pod "4190b947-a737-4a67-bfa9-dad8bb4a7499" (UID: "4190b947-a737-4a67-bfa9-dad8bb4a7499"). InnerVolumeSpecName "kube-api-access-x6mqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:47:22 crc kubenswrapper[4661]: I0120 18:47:22.977732 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4190b947-a737-4a67-bfa9-dad8bb4a7499" (UID: "4190b947-a737-4a67-bfa9-dad8bb4a7499"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:47:22 crc kubenswrapper[4661]: I0120 18:47:22.983295 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-inventory" (OuterVolumeSpecName: "inventory") pod "4190b947-a737-4a67-bfa9-dad8bb4a7499" (UID: "4190b947-a737-4a67-bfa9-dad8bb4a7499"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.052644 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.052703 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.052713 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6mqk\" (UniqueName: \"kubernetes.io/projected/4190b947-a737-4a67-bfa9-dad8bb4a7499-kube-api-access-x6mqk\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.052722 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4190b947-a737-4a67-bfa9-dad8bb4a7499-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.466599 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" event={"ID":"4190b947-a737-4a67-bfa9-dad8bb4a7499","Type":"ContainerDied","Data":"e238bcb0aaee93f8b143e3f14366a432f4a9cc56ffe5d0d83fc175202c2cee29"} Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.466730 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e238bcb0aaee93f8b143e3f14366a432f4a9cc56ffe5d0d83fc175202c2cee29" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.466626 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wxksd" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.570924 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5"] Jan 20 18:47:23 crc kubenswrapper[4661]: E0120 18:47:23.571706 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4190b947-a737-4a67-bfa9-dad8bb4a7499" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.571725 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="4190b947-a737-4a67-bfa9-dad8bb4a7499" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.571900 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="4190b947-a737-4a67-bfa9-dad8bb4a7499" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.572523 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.573539 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.584088 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.584334 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.586399 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.586604 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.586788 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.613870 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5"] Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.667603 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p6cj5\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.667711 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p6cj5\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.667778 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p6cj5\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.667810 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c7zb\" (UniqueName: \"kubernetes.io/projected/4fae988d-a1e4-4f89-8a5f-45989cd3584c-kube-api-access-4c7zb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p6cj5\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.754286 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wfq2c"] Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.768798 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p6cj5\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.768891 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p6cj5\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.768938 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c7zb\" (UniqueName: \"kubernetes.io/projected/4fae988d-a1e4-4f89-8a5f-45989cd3584c-kube-api-access-4c7zb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p6cj5\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.768988 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p6cj5\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.775797 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p6cj5\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.781240 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p6cj5\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc 
kubenswrapper[4661]: I0120 18:47:23.803549 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p6cj5\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.817443 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c7zb\" (UniqueName: \"kubernetes.io/projected/4fae988d-a1e4-4f89-8a5f-45989cd3584c-kube-api-access-4c7zb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p6cj5\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.894996 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.938914 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.939161 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:23 crc kubenswrapper[4661]: I0120 18:47:23.988383 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:24 crc kubenswrapper[4661]: I0120 18:47:24.465071 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5"] Jan 20 18:47:24 crc kubenswrapper[4661]: I0120 18:47:24.523610 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:25 crc kubenswrapper[4661]: I0120 18:47:25.490086 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" event={"ID":"4fae988d-a1e4-4f89-8a5f-45989cd3584c","Type":"ContainerStarted","Data":"c85681cd0f36e018382ef8289b0806f5407e8da35905cf1b4895478ebd5235d2"} Jan 20 18:47:25 crc kubenswrapper[4661]: I0120 18:47:25.490142 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" event={"ID":"4fae988d-a1e4-4f89-8a5f-45989cd3584c","Type":"ContainerStarted","Data":"231655ff045a82ca15fdaa7dd0a1fbc4256f563bf1ae8c4f5e93ebcf74189e17"} Jan 20 18:47:25 crc kubenswrapper[4661]: I0120 18:47:25.490325 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wfq2c" podUID="87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" containerName="registry-server" containerID="cri-o://130cdf30f52745d8a39fbbc543711800ce0a11b718ac03312fd917f8650a4dab" gracePeriod=2 Jan 20 18:47:25 crc kubenswrapper[4661]: I0120 18:47:25.526882 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" podStartSLOduration=2.059628387 podStartE2EDuration="2.526848973s" podCreationTimestamp="2026-01-20 18:47:23 +0000 UTC" firstStartedPulling="2026-01-20 18:47:24.471526879 +0000 UTC m=+2500.802316541" lastFinishedPulling="2026-01-20 18:47:24.938747455 +0000 UTC m=+2501.269537127" observedRunningTime="2026-01-20 
18:47:25.516340991 +0000 UTC m=+2501.847130673" watchObservedRunningTime="2026-01-20 18:47:25.526848973 +0000 UTC m=+2501.857638675" Jan 20 18:47:25 crc kubenswrapper[4661]: I0120 18:47:25.910071 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:25 crc kubenswrapper[4661]: I0120 18:47:25.968991 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bxk9f"] Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.026947 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-catalog-content\") pod \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\" (UID: \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\") " Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.036997 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-utilities" (OuterVolumeSpecName: "utilities") pod "87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" (UID: "87360224-a7a7-42d5-bfff-f4ff1e8fe9b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.035649 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-utilities\") pod \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\" (UID: \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\") " Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.040700 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx2sn\" (UniqueName: \"kubernetes.io/projected/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-kube-api-access-dx2sn\") pod \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\" (UID: \"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5\") " Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.043947 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.045761 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-kube-api-access-dx2sn" (OuterVolumeSpecName: "kube-api-access-dx2sn") pod "87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" (UID: "87360224-a7a7-42d5-bfff-f4ff1e8fe9b5"). InnerVolumeSpecName "kube-api-access-dx2sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.081938 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" (UID: "87360224-a7a7-42d5-bfff-f4ff1e8fe9b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.147301 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.147343 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx2sn\" (UniqueName: \"kubernetes.io/projected/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5-kube-api-access-dx2sn\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.504590 4661 generic.go:334] "Generic (PLEG): container finished" podID="87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" containerID="130cdf30f52745d8a39fbbc543711800ce0a11b718ac03312fd917f8650a4dab" exitCode=0 Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.504783 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfq2c" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.504800 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfq2c" event={"ID":"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5","Type":"ContainerDied","Data":"130cdf30f52745d8a39fbbc543711800ce0a11b718ac03312fd917f8650a4dab"} Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.504927 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfq2c" event={"ID":"87360224-a7a7-42d5-bfff-f4ff1e8fe9b5","Type":"ContainerDied","Data":"614b4e77c1c595d48bca257d8d2689338804a8ee18d82d19677096e3e90df12a"} Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.504983 4661 scope.go:117] "RemoveContainer" containerID="130cdf30f52745d8a39fbbc543711800ce0a11b718ac03312fd917f8650a4dab" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.540310 4661 scope.go:117] "RemoveContainer" containerID="8fb129d29077134403060cafe62b2fbe17856b73050e515caffc8960146694a3" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.544472 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wfq2c"] Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.563220 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wfq2c"] Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.565730 4661 scope.go:117] "RemoveContainer" containerID="7da903c6c5b34aaaff14931792a4e722d481265e6e90776c7b5a2132a2eb50ad" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.631104 4661 scope.go:117] "RemoveContainer" containerID="130cdf30f52745d8a39fbbc543711800ce0a11b718ac03312fd917f8650a4dab" Jan 20 18:47:26 crc kubenswrapper[4661]: E0120 18:47:26.631980 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"130cdf30f52745d8a39fbbc543711800ce0a11b718ac03312fd917f8650a4dab\": container with ID starting with 130cdf30f52745d8a39fbbc543711800ce0a11b718ac03312fd917f8650a4dab not found: ID does not exist" containerID="130cdf30f52745d8a39fbbc543711800ce0a11b718ac03312fd917f8650a4dab" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.632075 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"130cdf30f52745d8a39fbbc543711800ce0a11b718ac03312fd917f8650a4dab"} err="failed to get container status 
\"130cdf30f52745d8a39fbbc543711800ce0a11b718ac03312fd917f8650a4dab\": rpc error: code = NotFound desc = could not find container \"130cdf30f52745d8a39fbbc543711800ce0a11b718ac03312fd917f8650a4dab\": container with ID starting with 130cdf30f52745d8a39fbbc543711800ce0a11b718ac03312fd917f8650a4dab not found: ID does not exist" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.632142 4661 scope.go:117] "RemoveContainer" containerID="8fb129d29077134403060cafe62b2fbe17856b73050e515caffc8960146694a3" Jan 20 18:47:26 crc kubenswrapper[4661]: E0120 18:47:26.632733 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fb129d29077134403060cafe62b2fbe17856b73050e515caffc8960146694a3\": container with ID starting with 8fb129d29077134403060cafe62b2fbe17856b73050e515caffc8960146694a3 not found: ID does not exist" containerID="8fb129d29077134403060cafe62b2fbe17856b73050e515caffc8960146694a3" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.632799 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fb129d29077134403060cafe62b2fbe17856b73050e515caffc8960146694a3"} err="failed to get container status \"8fb129d29077134403060cafe62b2fbe17856b73050e515caffc8960146694a3\": rpc error: code = NotFound desc = could not find container \"8fb129d29077134403060cafe62b2fbe17856b73050e515caffc8960146694a3\": container with ID starting with 8fb129d29077134403060cafe62b2fbe17856b73050e515caffc8960146694a3 not found: ID does not exist" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.632841 4661 scope.go:117] "RemoveContainer" containerID="7da903c6c5b34aaaff14931792a4e722d481265e6e90776c7b5a2132a2eb50ad" Jan 20 18:47:26 crc kubenswrapper[4661]: E0120 18:47:26.633215 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7da903c6c5b34aaaff14931792a4e722d481265e6e90776c7b5a2132a2eb50ad\": container with ID starting with 7da903c6c5b34aaaff14931792a4e722d481265e6e90776c7b5a2132a2eb50ad not found: ID does not exist" containerID="7da903c6c5b34aaaff14931792a4e722d481265e6e90776c7b5a2132a2eb50ad" Jan 20 18:47:26 crc kubenswrapper[4661]: I0120 18:47:26.633247 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da903c6c5b34aaaff14931792a4e722d481265e6e90776c7b5a2132a2eb50ad"} err="failed to get container status \"7da903c6c5b34aaaff14931792a4e722d481265e6e90776c7b5a2132a2eb50ad\": rpc error: code = NotFound desc = could not find container \"7da903c6c5b34aaaff14931792a4e722d481265e6e90776c7b5a2132a2eb50ad\": container with ID starting with 7da903c6c5b34aaaff14931792a4e722d481265e6e90776c7b5a2132a2eb50ad not found: ID does not exist" Jan 20 18:47:27 crc kubenswrapper[4661]: I0120 18:47:27.518281 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bxk9f" podUID="f4f2f01d-d131-4ad9-b60b-51f90d6a6655" containerName="registry-server" containerID="cri-o://cd57d2c3075b17c859eb1190948f600f9e7be7149fa78f9ebc298bb074bcdde5" gracePeriod=2 Jan 20 18:47:27 crc kubenswrapper[4661]: I0120 18:47:27.989479 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.083606 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-catalog-content\") pod \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\" (UID: \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\") " Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.083652 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vltc\" (UniqueName: \"kubernetes.io/projected/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-kube-api-access-6vltc\") pod \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\" (UID: \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\") " Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.083842 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-utilities\") pod \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\" (UID: \"f4f2f01d-d131-4ad9-b60b-51f90d6a6655\") " Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.084650 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-utilities" (OuterVolumeSpecName: "utilities") pod "f4f2f01d-d131-4ad9-b60b-51f90d6a6655" (UID: "f4f2f01d-d131-4ad9-b60b-51f90d6a6655"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.098857 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-kube-api-access-6vltc" (OuterVolumeSpecName: "kube-api-access-6vltc") pod "f4f2f01d-d131-4ad9-b60b-51f90d6a6655" (UID: "f4f2f01d-d131-4ad9-b60b-51f90d6a6655"). InnerVolumeSpecName "kube-api-access-6vltc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.148710 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4f2f01d-d131-4ad9-b60b-51f90d6a6655" (UID: "f4f2f01d-d131-4ad9-b60b-51f90d6a6655"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.153209 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" path="/var/lib/kubelet/pods/87360224-a7a7-42d5-bfff-f4ff1e8fe9b5/volumes" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.186300 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.186328 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vltc\" (UniqueName: \"kubernetes.io/projected/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-kube-api-access-6vltc\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.186338 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f2f01d-d131-4ad9-b60b-51f90d6a6655-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.530877 4661 generic.go:334] "Generic (PLEG): container finished" podID="f4f2f01d-d131-4ad9-b60b-51f90d6a6655" containerID="cd57d2c3075b17c859eb1190948f600f9e7be7149fa78f9ebc298bb074bcdde5" exitCode=0 Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.530936 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bxk9f" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.530956 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxk9f" event={"ID":"f4f2f01d-d131-4ad9-b60b-51f90d6a6655","Type":"ContainerDied","Data":"cd57d2c3075b17c859eb1190948f600f9e7be7149fa78f9ebc298bb074bcdde5"} Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.531010 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxk9f" event={"ID":"f4f2f01d-d131-4ad9-b60b-51f90d6a6655","Type":"ContainerDied","Data":"872bfbd4a85a819e784dd336d706e6a94a9a1bc993a557744121dd31b248a1c2"} Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.531047 4661 scope.go:117] "RemoveContainer" containerID="cd57d2c3075b17c859eb1190948f600f9e7be7149fa78f9ebc298bb074bcdde5" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.561890 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bxk9f"] Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.572343 4661 scope.go:117] "RemoveContainer" containerID="9cc143086fb554c75addb968caf553241eaf7d48ac91ff0b7985b366c49b228d" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.579752 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bxk9f"] Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.598100 4661 scope.go:117] "RemoveContainer" containerID="0255956e590ba402ff5b7a18e33594cecaca6231ae75b389d8ba04eaa0a99a02" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.634393 4661 scope.go:117] "RemoveContainer" containerID="cd57d2c3075b17c859eb1190948f600f9e7be7149fa78f9ebc298bb074bcdde5" Jan 20 18:47:28 crc kubenswrapper[4661]: E0120 18:47:28.634968 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd57d2c3075b17c859eb1190948f600f9e7be7149fa78f9ebc298bb074bcdde5\": container with ID 
starting with cd57d2c3075b17c859eb1190948f600f9e7be7149fa78f9ebc298bb074bcdde5 not found: ID does not exist" containerID="cd57d2c3075b17c859eb1190948f600f9e7be7149fa78f9ebc298bb074bcdde5" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.635014 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd57d2c3075b17c859eb1190948f600f9e7be7149fa78f9ebc298bb074bcdde5"} err="failed to get container status \"cd57d2c3075b17c859eb1190948f600f9e7be7149fa78f9ebc298bb074bcdde5\": rpc error: code = NotFound desc = could not find container \"cd57d2c3075b17c859eb1190948f600f9e7be7149fa78f9ebc298bb074bcdde5\": container with ID starting with cd57d2c3075b17c859eb1190948f600f9e7be7149fa78f9ebc298bb074bcdde5 not found: ID does not exist" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.635046 4661 scope.go:117] "RemoveContainer" containerID="9cc143086fb554c75addb968caf553241eaf7d48ac91ff0b7985b366c49b228d" Jan 20 18:47:28 crc kubenswrapper[4661]: E0120 18:47:28.635449 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cc143086fb554c75addb968caf553241eaf7d48ac91ff0b7985b366c49b228d\": container with ID starting with 9cc143086fb554c75addb968caf553241eaf7d48ac91ff0b7985b366c49b228d not found: ID does not exist" containerID="9cc143086fb554c75addb968caf553241eaf7d48ac91ff0b7985b366c49b228d" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.635474 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cc143086fb554c75addb968caf553241eaf7d48ac91ff0b7985b366c49b228d"} err="failed to get container status \"9cc143086fb554c75addb968caf553241eaf7d48ac91ff0b7985b366c49b228d\": rpc error: code = NotFound desc = could not find container \"9cc143086fb554c75addb968caf553241eaf7d48ac91ff0b7985b366c49b228d\": container with ID starting with 9cc143086fb554c75addb968caf553241eaf7d48ac91ff0b7985b366c49b228d not found: ID does not exist" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.635493 4661 scope.go:117] "RemoveContainer" containerID="0255956e590ba402ff5b7a18e33594cecaca6231ae75b389d8ba04eaa0a99a02" Jan 20 18:47:28 crc kubenswrapper[4661]: E0120 18:47:28.635859 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0255956e590ba402ff5b7a18e33594cecaca6231ae75b389d8ba04eaa0a99a02\": container with ID starting with 0255956e590ba402ff5b7a18e33594cecaca6231ae75b389d8ba04eaa0a99a02 not found: ID does not exist" containerID="0255956e590ba402ff5b7a18e33594cecaca6231ae75b389d8ba04eaa0a99a02" Jan 20 18:47:28 crc kubenswrapper[4661]: I0120 18:47:28.635911 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0255956e590ba402ff5b7a18e33594cecaca6231ae75b389d8ba04eaa0a99a02"} err="failed to get container status \"0255956e590ba402ff5b7a18e33594cecaca6231ae75b389d8ba04eaa0a99a02\": rpc error: code = NotFound desc = could not find container \"0255956e590ba402ff5b7a18e33594cecaca6231ae75b389d8ba04eaa0a99a02\": container with ID starting with 0255956e590ba402ff5b7a18e33594cecaca6231ae75b389d8ba04eaa0a99a02 not found: ID does not exist" Jan 20 18:47:30 crc kubenswrapper[4661]: I0120 18:47:30.164489 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f2f01d-d131-4ad9-b60b-51f90d6a6655" path="/var/lib/kubelet/pods/f4f2f01d-d131-4ad9-b60b-51f90d6a6655/volumes" Jan 20 18:47:36 crc kubenswrapper[4661]: I0120 
18:47:36.143171 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:47:36 crc kubenswrapper[4661]: E0120 18:47:36.144407 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:47:47 crc kubenswrapper[4661]: I0120 18:47:47.141816 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:47:47 crc kubenswrapper[4661]: E0120 18:47:47.142478 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:48:02 crc kubenswrapper[4661]: I0120 18:48:02.142296 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:48:02 crc kubenswrapper[4661]: E0120 18:48:02.143922 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:48:07 crc kubenswrapper[4661]: I0120 18:48:07.877507 4661 generic.go:334] "Generic (PLEG): container finished" podID="4fae988d-a1e4-4f89-8a5f-45989cd3584c" containerID="c85681cd0f36e018382ef8289b0806f5407e8da35905cf1b4895478ebd5235d2" exitCode=0 Jan 20 18:48:07 crc kubenswrapper[4661]: I0120 18:48:07.877639 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" event={"ID":"4fae988d-a1e4-4f89-8a5f-45989cd3584c","Type":"ContainerDied","Data":"c85681cd0f36e018382ef8289b0806f5407e8da35905cf1b4895478ebd5235d2"} Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.284593 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.383392 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c7zb\" (UniqueName: \"kubernetes.io/projected/4fae988d-a1e4-4f89-8a5f-45989cd3584c-kube-api-access-4c7zb\") pod \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.383794 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-ceph\") pod \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.383885 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-ssh-key-openstack-edpm-ipam\") pod \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.383912 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-inventory\") pod \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\" (UID: \"4fae988d-a1e4-4f89-8a5f-45989cd3584c\") " Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.390056 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fae988d-a1e4-4f89-8a5f-45989cd3584c-kube-api-access-4c7zb" (OuterVolumeSpecName: "kube-api-access-4c7zb") pod "4fae988d-a1e4-4f89-8a5f-45989cd3584c" (UID: "4fae988d-a1e4-4f89-8a5f-45989cd3584c"). InnerVolumeSpecName "kube-api-access-4c7zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.392054 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-ceph" (OuterVolumeSpecName: "ceph") pod "4fae988d-a1e4-4f89-8a5f-45989cd3584c" (UID: "4fae988d-a1e4-4f89-8a5f-45989cd3584c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.413033 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4fae988d-a1e4-4f89-8a5f-45989cd3584c" (UID: "4fae988d-a1e4-4f89-8a5f-45989cd3584c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.434439 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-inventory" (OuterVolumeSpecName: "inventory") pod "4fae988d-a1e4-4f89-8a5f-45989cd3584c" (UID: "4fae988d-a1e4-4f89-8a5f-45989cd3584c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.485270 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.485307 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.485319 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fae988d-a1e4-4f89-8a5f-45989cd3584c-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.485328 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c7zb\" (UniqueName: \"kubernetes.io/projected/4fae988d-a1e4-4f89-8a5f-45989cd3584c-kube-api-access-4c7zb\") on node \"crc\" DevicePath \"\"" Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.903822 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" event={"ID":"4fae988d-a1e4-4f89-8a5f-45989cd3584c","Type":"ContainerDied","Data":"231655ff045a82ca15fdaa7dd0a1fbc4256f563bf1ae8c4f5e93ebcf74189e17"} Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.903911 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="231655ff045a82ca15fdaa7dd0a1fbc4256f563bf1ae8c4f5e93ebcf74189e17" Jan 20 18:48:09 crc kubenswrapper[4661]: I0120 18:48:09.903852 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p6cj5" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.036636 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk"] Jan 20 18:48:10 crc kubenswrapper[4661]: E0120 18:48:10.037205 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" containerName="extract-content" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.037234 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" containerName="extract-content" Jan 20 18:48:10 crc kubenswrapper[4661]: E0120 18:48:10.037262 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f2f01d-d131-4ad9-b60b-51f90d6a6655" containerName="extract-utilities" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.037275 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f2f01d-d131-4ad9-b60b-51f90d6a6655" containerName="extract-utilities" Jan 20 18:48:10 crc kubenswrapper[4661]: E0120 18:48:10.037300 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fae988d-a1e4-4f89-8a5f-45989cd3584c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.037314 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fae988d-a1e4-4f89-8a5f-45989cd3584c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:48:10 crc kubenswrapper[4661]: E0120 18:48:10.037340 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" containerName="extract-utilities" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.037351 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" containerName="extract-utilities" Jan 20 18:48:10 crc kubenswrapper[4661]: E0120 18:48:10.037369 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f2f01d-d131-4ad9-b60b-51f90d6a6655" containerName="registry-server" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.037381 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f2f01d-d131-4ad9-b60b-51f90d6a6655" containerName="registry-server" Jan 20 18:48:10 crc kubenswrapper[4661]: E0120 18:48:10.037403 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" containerName="registry-server" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.037414 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" containerName="registry-server" Jan 20 18:48:10 crc kubenswrapper[4661]: E0120 18:48:10.037432 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f2f01d-d131-4ad9-b60b-51f90d6a6655" containerName="extract-content" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.037443 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f2f01d-d131-4ad9-b60b-51f90d6a6655" containerName="extract-content" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.037743 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fae988d-a1e4-4f89-8a5f-45989cd3584c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.037777 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f2f01d-d131-4ad9-b60b-51f90d6a6655" 
containerName="registry-server" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.037801 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="87360224-a7a7-42d5-bfff-f4ff1e8fe9b5" containerName="registry-server" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.038826 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.041848 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.042778 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.042967 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.043123 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.043295 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.048777 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk"] Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.204085 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.204255 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.204458 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2dzg\" (UniqueName: \"kubernetes.io/projected/98f2afa8-7c09-4427-a558-a3da2bfd4df4-kube-api-access-j2dzg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.204643 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.306541 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.306707 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2dzg\" (UniqueName: \"kubernetes.io/projected/98f2afa8-7c09-4427-a558-a3da2bfd4df4-kube-api-access-j2dzg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.306766 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.306813 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.310421 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.310950 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.314183 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.338553 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2dzg\" (UniqueName: \"kubernetes.io/projected/98f2afa8-7c09-4427-a558-a3da2bfd4df4-kube-api-access-j2dzg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.402140 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:10 crc kubenswrapper[4661]: I0120 18:48:10.944904 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk"] Jan 20 18:48:11 crc kubenswrapper[4661]: I0120 18:48:11.924697 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" event={"ID":"98f2afa8-7c09-4427-a558-a3da2bfd4df4","Type":"ContainerStarted","Data":"fb24131e0e551464c37b2b8df05e6363b83f6f9bd079a6f3a34884e47d67290a"} Jan 20 18:48:11 crc kubenswrapper[4661]: I0120 18:48:11.925447 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" event={"ID":"98f2afa8-7c09-4427-a558-a3da2bfd4df4","Type":"ContainerStarted","Data":"17ad69778659876811e5d18ebb6f8033c29a433c769ec184c5f72dd2fd58354c"} Jan 20 18:48:14 crc kubenswrapper[4661]: I0120 18:48:14.737108 4661 patch_prober.go:28] interesting pod/oauth-openshift-7f54ff7574-qv4rm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 18:48:14 crc kubenswrapper[4661]: I0120 18:48:14.737589 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" podUID="ff8652fe-0f18-4e15-97d2-08e54353a88e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 18:48:14 crc kubenswrapper[4661]: I0120 18:48:14.745771 4661 patch_prober.go:28] interesting pod/oauth-openshift-7f54ff7574-qv4rm container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 18:48:14 crc kubenswrapper[4661]: I0120 18:48:14.745822 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-7f54ff7574-qv4rm" podUID="ff8652fe-0f18-4e15-97d2-08e54353a88e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 18:48:14 crc kubenswrapper[4661]: I0120 18:48:14.771560 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:48:14 crc kubenswrapper[4661]: E0120 18:48:14.771861 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:48:16 crc kubenswrapper[4661]: I0120 18:48:16.966321 4661 generic.go:334] "Generic (PLEG): container finished" podID="98f2afa8-7c09-4427-a558-a3da2bfd4df4" containerID="fb24131e0e551464c37b2b8df05e6363b83f6f9bd079a6f3a34884e47d67290a" exitCode=0 Jan 20 18:48:16 crc 
kubenswrapper[4661]: I0120 18:48:16.966376 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" event={"ID":"98f2afa8-7c09-4427-a558-a3da2bfd4df4","Type":"ContainerDied","Data":"fb24131e0e551464c37b2b8df05e6363b83f6f9bd079a6f3a34884e47d67290a"} Jan 20 18:48:18 crc kubenswrapper[4661]: I0120 18:48:18.466568 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:18 crc kubenswrapper[4661]: I0120 18:48:18.582482 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-ceph\") pod \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " Jan 20 18:48:18 crc kubenswrapper[4661]: I0120 18:48:18.582628 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-inventory\") pod \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " Jan 20 18:48:18 crc kubenswrapper[4661]: I0120 18:48:18.582726 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2dzg\" (UniqueName: \"kubernetes.io/projected/98f2afa8-7c09-4427-a558-a3da2bfd4df4-kube-api-access-j2dzg\") pod \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " Jan 20 18:48:18 crc kubenswrapper[4661]: I0120 18:48:18.582795 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-ssh-key-openstack-edpm-ipam\") pod \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\" (UID: \"98f2afa8-7c09-4427-a558-a3da2bfd4df4\") " Jan 20 18:48:18 crc kubenswrapper[4661]: I0120 18:48:18.598964 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f2afa8-7c09-4427-a558-a3da2bfd4df4-kube-api-access-j2dzg" (OuterVolumeSpecName: "kube-api-access-j2dzg") pod "98f2afa8-7c09-4427-a558-a3da2bfd4df4" (UID: "98f2afa8-7c09-4427-a558-a3da2bfd4df4"). InnerVolumeSpecName "kube-api-access-j2dzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:48:18 crc kubenswrapper[4661]: I0120 18:48:18.599099 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-ceph" (OuterVolumeSpecName: "ceph") pod "98f2afa8-7c09-4427-a558-a3da2bfd4df4" (UID: "98f2afa8-7c09-4427-a558-a3da2bfd4df4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:48:18 crc kubenswrapper[4661]: I0120 18:48:18.613307 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "98f2afa8-7c09-4427-a558-a3da2bfd4df4" (UID: "98f2afa8-7c09-4427-a558-a3da2bfd4df4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:48:18 crc kubenswrapper[4661]: I0120 18:48:18.622296 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-inventory" (OuterVolumeSpecName: "inventory") pod "98f2afa8-7c09-4427-a558-a3da2bfd4df4" (UID: "98f2afa8-7c09-4427-a558-a3da2bfd4df4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:48:18 crc kubenswrapper[4661]: I0120 18:48:18.685198 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:48:18 crc kubenswrapper[4661]: I0120 18:48:18.685231 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:48:18 crc kubenswrapper[4661]: I0120 18:48:18.685245 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2dzg\" (UniqueName: \"kubernetes.io/projected/98f2afa8-7c09-4427-a558-a3da2bfd4df4-kube-api-access-j2dzg\") on node \"crc\" DevicePath \"\"" Jan 20 18:48:18 crc kubenswrapper[4661]: I0120 18:48:18.685257 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98f2afa8-7c09-4427-a558-a3da2bfd4df4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.005144 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" event={"ID":"98f2afa8-7c09-4427-a558-a3da2bfd4df4","Type":"ContainerDied","Data":"17ad69778659876811e5d18ebb6f8033c29a433c769ec184c5f72dd2fd58354c"} Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.005190 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17ad69778659876811e5d18ebb6f8033c29a433c769ec184c5f72dd2fd58354c" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.005283 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.092465 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8"] Jan 20 18:48:19 crc kubenswrapper[4661]: E0120 18:48:19.092954 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f2afa8-7c09-4427-a558-a3da2bfd4df4" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.092980 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f2afa8-7c09-4427-a558-a3da2bfd4df4" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.093172 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f2afa8-7c09-4427-a558-a3da2bfd4df4" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.095842 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.099128 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.099316 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.099420 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.099441 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.099597 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.107821 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8"] Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.206943 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.207056 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.207204 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2hpz\" (UniqueName: \"kubernetes.io/projected/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-kube-api-access-d2hpz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.207290 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.308741 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.309552 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.309645 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2hpz\" (UniqueName: \"kubernetes.io/projected/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-kube-api-access-d2hpz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.309732 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.314763 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.316149 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.316912 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.328128 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2hpz\" (UniqueName: \"kubernetes.io/projected/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-kube-api-access-d2hpz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.425924 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:48:19 crc kubenswrapper[4661]: I0120 18:48:19.985757 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8"] Jan 20 18:48:20 crc kubenswrapper[4661]: I0120 18:48:20.013794 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" event={"ID":"e38a8deb-8469-4d4d-a865-4d374e8fcb7c","Type":"ContainerStarted","Data":"9664deffeb6d0ffee5778ab97b0c179262e5626a9793c30e9aa814f69a28ffd2"} Jan 20 18:48:21 crc kubenswrapper[4661]: I0120 18:48:21.028415 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" event={"ID":"e38a8deb-8469-4d4d-a865-4d374e8fcb7c","Type":"ContainerStarted","Data":"bc63a61a2075dbd910d5c8812830ab0d1bfbc414ae00c00e4ebb31c6e93c4e5c"} Jan 20 18:48:21 crc kubenswrapper[4661]: I0120 18:48:21.051887 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" podStartSLOduration=1.316819861 podStartE2EDuration="2.05186146s" podCreationTimestamp="2026-01-20 18:48:19 +0000 UTC" firstStartedPulling="2026-01-20 18:48:19.980398668 +0000 UTC m=+2556.311188330" lastFinishedPulling="2026-01-20 18:48:20.715440247 +0000 UTC m=+2557.046229929" observedRunningTime="2026-01-20 18:48:21.043060385 +0000 UTC m=+2557.373850047" watchObservedRunningTime="2026-01-20 18:48:21.05186146 +0000 UTC m=+2557.382651122" Jan 20 18:48:29 crc kubenswrapper[4661]: I0120 18:48:29.142913 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:48:29 crc kubenswrapper[4661]: E0120 18:48:29.143772 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:48:40 crc kubenswrapper[4661]: I0120 18:48:40.143091 4661 scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:48:41 crc kubenswrapper[4661]: I0120 18:48:41.231258 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"09f90b8546dddfc6e996cdbb77ceb056fba14e8d7b35198aeb2ca0341a0647b6"} Jan 20 18:49:13 crc kubenswrapper[4661]: I0120 18:49:13.491432 4661 generic.go:334] "Generic (PLEG): container finished" podID="e38a8deb-8469-4d4d-a865-4d374e8fcb7c" containerID="bc63a61a2075dbd910d5c8812830ab0d1bfbc414ae00c00e4ebb31c6e93c4e5c" exitCode=0 Jan 20 18:49:13 crc kubenswrapper[4661]: I0120 18:49:13.491533 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" event={"ID":"e38a8deb-8469-4d4d-a865-4d374e8fcb7c","Type":"ContainerDied","Data":"bc63a61a2075dbd910d5c8812830ab0d1bfbc414ae00c00e4ebb31c6e93c4e5c"} Jan 20 18:49:14 crc kubenswrapper[4661]: I0120 18:49:14.927501 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.034477 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-ssh-key-openstack-edpm-ipam\") pod \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.034572 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-inventory\") pod \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.034616 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2hpz\" (UniqueName: \"kubernetes.io/projected/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-kube-api-access-d2hpz\") pod \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.034727 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-ceph\") pod \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\" (UID: \"e38a8deb-8469-4d4d-a865-4d374e8fcb7c\") " Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.039874 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-kube-api-access-d2hpz" (OuterVolumeSpecName: "kube-api-access-d2hpz") pod "e38a8deb-8469-4d4d-a865-4d374e8fcb7c" (UID: "e38a8deb-8469-4d4d-a865-4d374e8fcb7c"). InnerVolumeSpecName "kube-api-access-d2hpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.040779 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-ceph" (OuterVolumeSpecName: "ceph") pod "e38a8deb-8469-4d4d-a865-4d374e8fcb7c" (UID: "e38a8deb-8469-4d4d-a865-4d374e8fcb7c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.060483 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-inventory" (OuterVolumeSpecName: "inventory") pod "e38a8deb-8469-4d4d-a865-4d374e8fcb7c" (UID: "e38a8deb-8469-4d4d-a865-4d374e8fcb7c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.073192 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e38a8deb-8469-4d4d-a865-4d374e8fcb7c" (UID: "e38a8deb-8469-4d4d-a865-4d374e8fcb7c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.136993 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.137061 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.137072 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2hpz\" (UniqueName: \"kubernetes.io/projected/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-kube-api-access-d2hpz\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.137082 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e38a8deb-8469-4d4d-a865-4d374e8fcb7c-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.523941 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" event={"ID":"e38a8deb-8469-4d4d-a865-4d374e8fcb7c","Type":"ContainerDied","Data":"9664deffeb6d0ffee5778ab97b0c179262e5626a9793c30e9aa814f69a28ffd2"} Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.525462 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9664deffeb6d0ffee5778ab97b0c179262e5626a9793c30e9aa814f69a28ffd2" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.525557 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.683800 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gr4vd"] Jan 20 18:49:15 crc kubenswrapper[4661]: E0120 18:49:15.684187 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e38a8deb-8469-4d4d-a865-4d374e8fcb7c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.684204 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e38a8deb-8469-4d4d-a865-4d374e8fcb7c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.684384 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e38a8deb-8469-4d4d-a865-4d374e8fcb7c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.685072 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.688594 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.688633 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.688842 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.690029 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.691034 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.692399 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gr4vd"] Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.749181 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gr4vd\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.749392 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-ceph\") pod \"ssh-known-hosts-edpm-deployment-gr4vd\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.749525 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vsn\" (UniqueName: \"kubernetes.io/projected/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-kube-api-access-z7vsn\") pod \"ssh-known-hosts-edpm-deployment-gr4vd\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.749620 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gr4vd\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.851775 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gr4vd\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.851823 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-ceph\") pod \"ssh-known-hosts-edpm-deployment-gr4vd\" (UID: 
\"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.851893 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7vsn\" (UniqueName: \"kubernetes.io/projected/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-kube-api-access-z7vsn\") pod \"ssh-known-hosts-edpm-deployment-gr4vd\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.851938 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gr4vd\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.856316 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gr4vd\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.857433 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-ceph\") pod \"ssh-known-hosts-edpm-deployment-gr4vd\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.858277 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gr4vd\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:15 crc kubenswrapper[4661]: I0120 18:49:15.873851 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7vsn\" (UniqueName: \"kubernetes.io/projected/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-kube-api-access-z7vsn\") pod \"ssh-known-hosts-edpm-deployment-gr4vd\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:16 crc kubenswrapper[4661]: I0120 18:49:16.018186 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:16 crc kubenswrapper[4661]: I0120 18:49:16.535252 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gr4vd"] Jan 20 18:49:17 crc kubenswrapper[4661]: I0120 18:49:17.540548 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" event={"ID":"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0","Type":"ContainerStarted","Data":"21f47151ea30bff9581eee2bd3f25238a9babb366514e8ee86c2da4aed67b03e"} Jan 20 18:49:17 crc kubenswrapper[4661]: I0120 18:49:17.541091 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" event={"ID":"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0","Type":"ContainerStarted","Data":"88268e576185b25d4f17e8308531d3966435511c26aac6b470eeda7b16cb2719"} Jan 20 18:49:17 crc kubenswrapper[4661]: I0120 18:49:17.563571 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" podStartSLOduration=2.121805826 podStartE2EDuration="2.563545426s" podCreationTimestamp="2026-01-20 18:49:15 +0000 UTC" firstStartedPulling="2026-01-20 18:49:16.538240986 +0000 UTC m=+2612.869030648" lastFinishedPulling="2026-01-20 18:49:16.979980576 +0000 UTC m=+2613.310770248" observedRunningTime="2026-01-20 18:49:17.562389766 +0000 UTC m=+2613.893179458" watchObservedRunningTime="2026-01-20 18:49:17.563545426 +0000 UTC m=+2613.894335108" Jan 20 18:49:27 crc kubenswrapper[4661]: I0120 18:49:27.635939 4661 generic.go:334] "Generic (PLEG): container finished" podID="e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0" containerID="21f47151ea30bff9581eee2bd3f25238a9babb366514e8ee86c2da4aed67b03e" exitCode=0 Jan 20 18:49:27 crc kubenswrapper[4661]: I0120 18:49:27.636198 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" event={"ID":"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0","Type":"ContainerDied","Data":"21f47151ea30bff9581eee2bd3f25238a9babb366514e8ee86c2da4aed67b03e"} Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.115890 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.230685 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7vsn\" (UniqueName: \"kubernetes.io/projected/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-kube-api-access-z7vsn\") pod \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.230917 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-ssh-key-openstack-edpm-ipam\") pod \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.230990 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-inventory-0\") pod \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.231039 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-ceph\") pod \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\" (UID: \"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0\") " Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.236580 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-ceph" (OuterVolumeSpecName: "ceph") pod "e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0" (UID: "e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.237594 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-kube-api-access-z7vsn" (OuterVolumeSpecName: "kube-api-access-z7vsn") pod "e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0" (UID: "e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0"). InnerVolumeSpecName "kube-api-access-z7vsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.260797 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0" (UID: "e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.264610 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0" (UID: "e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.335979 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.336014 4661 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.336027 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.336036 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7vsn\" (UniqueName: \"kubernetes.io/projected/e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0-kube-api-access-z7vsn\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.658455 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" event={"ID":"e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0","Type":"ContainerDied","Data":"88268e576185b25d4f17e8308531d3966435511c26aac6b470eeda7b16cb2719"} Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.658499 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88268e576185b25d4f17e8308531d3966435511c26aac6b470eeda7b16cb2719" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.658500 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gr4vd" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.759375 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr"] Jan 20 18:49:29 crc kubenswrapper[4661]: E0120 18:49:29.759824 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0" containerName="ssh-known-hosts-edpm-deployment" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.759846 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0" containerName="ssh-known-hosts-edpm-deployment" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.760032 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0" containerName="ssh-known-hosts-edpm-deployment" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.760750 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.767116 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.767896 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.768112 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.768423 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.769054 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.773843 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr"] Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.844450 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9zbw\" (UniqueName: \"kubernetes.io/projected/fd038034-bcb2-4723-a94f-16af58612f58-kube-api-access-m9zbw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kzmcr\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.844565 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kzmcr\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.844609 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kzmcr\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.844636 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kzmcr\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.946294 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kzmcr\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.946560 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kzmcr\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.946983 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9zbw\" (UniqueName: \"kubernetes.io/projected/fd038034-bcb2-4723-a94f-16af58612f58-kube-api-access-m9zbw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kzmcr\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.947228 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kzmcr\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.950653 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kzmcr\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.954511 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kzmcr\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.956447 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kzmcr\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:29 crc kubenswrapper[4661]: I0120 18:49:29.963393 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9zbw\" (UniqueName: \"kubernetes.io/projected/fd038034-bcb2-4723-a94f-16af58612f58-kube-api-access-m9zbw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kzmcr\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:30 crc kubenswrapper[4661]: I0120 18:49:30.077965 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:30 crc kubenswrapper[4661]: I0120 18:49:30.658205 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr"] Jan 20 18:49:30 crc kubenswrapper[4661]: W0120 18:49:30.667589 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd038034_bcb2_4723_a94f_16af58612f58.slice/crio-2f6a8eb3e73162aeb7de28a03a299d90ed63b3facad7e8ba0a1bd2fa46198365 WatchSource:0}: Error finding container 2f6a8eb3e73162aeb7de28a03a299d90ed63b3facad7e8ba0a1bd2fa46198365: Status 404 returned error can't find the container with id 2f6a8eb3e73162aeb7de28a03a299d90ed63b3facad7e8ba0a1bd2fa46198365 Jan 20 18:49:31 crc kubenswrapper[4661]: I0120 18:49:31.684628 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" event={"ID":"fd038034-bcb2-4723-a94f-16af58612f58","Type":"ContainerStarted","Data":"8b979341ac3f9dfd61639777dd8d540837c3fd76ec655c0892a7378e91456186"} Jan 20 18:49:31 crc kubenswrapper[4661]: I0120 18:49:31.685436 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" event={"ID":"fd038034-bcb2-4723-a94f-16af58612f58","Type":"ContainerStarted","Data":"2f6a8eb3e73162aeb7de28a03a299d90ed63b3facad7e8ba0a1bd2fa46198365"} Jan 20 18:49:31 crc kubenswrapper[4661]: I0120 18:49:31.713151 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" podStartSLOduration=2.205360124 podStartE2EDuration="2.713122054s" podCreationTimestamp="2026-01-20 18:49:29 +0000 UTC" firstStartedPulling="2026-01-20 18:49:30.678435643 +0000 UTC m=+2627.009225325" lastFinishedPulling="2026-01-20 18:49:31.186197593 +0000 UTC m=+2627.516987255" observedRunningTime="2026-01-20 18:49:31.709092421 +0000 UTC m=+2628.039882093" watchObservedRunningTime="2026-01-20 18:49:31.713122054 +0000 UTC m=+2628.043911726" Jan 20 18:49:40 crc kubenswrapper[4661]: I0120 18:49:40.762729 4661 generic.go:334] "Generic (PLEG): container finished" podID="fd038034-bcb2-4723-a94f-16af58612f58" containerID="8b979341ac3f9dfd61639777dd8d540837c3fd76ec655c0892a7378e91456186" exitCode=0 Jan 20 18:49:40 crc kubenswrapper[4661]: I0120 18:49:40.762864 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" event={"ID":"fd038034-bcb2-4723-a94f-16af58612f58","Type":"ContainerDied","Data":"8b979341ac3f9dfd61639777dd8d540837c3fd76ec655c0892a7378e91456186"} Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.160689 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.202006 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-ssh-key-openstack-edpm-ipam\") pod \"fd038034-bcb2-4723-a94f-16af58612f58\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.202128 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9zbw\" (UniqueName: \"kubernetes.io/projected/fd038034-bcb2-4723-a94f-16af58612f58-kube-api-access-m9zbw\") pod \"fd038034-bcb2-4723-a94f-16af58612f58\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.202169 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-inventory\") pod \"fd038034-bcb2-4723-a94f-16af58612f58\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.202265 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-ceph\") pod \"fd038034-bcb2-4723-a94f-16af58612f58\" (UID: \"fd038034-bcb2-4723-a94f-16af58612f58\") " Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.226068 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd038034-bcb2-4723-a94f-16af58612f58-kube-api-access-m9zbw" (OuterVolumeSpecName: "kube-api-access-m9zbw") pod "fd038034-bcb2-4723-a94f-16af58612f58" (UID: "fd038034-bcb2-4723-a94f-16af58612f58"). InnerVolumeSpecName "kube-api-access-m9zbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.226177 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-ceph" (OuterVolumeSpecName: "ceph") pod "fd038034-bcb2-4723-a94f-16af58612f58" (UID: "fd038034-bcb2-4723-a94f-16af58612f58"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.229502 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fd038034-bcb2-4723-a94f-16af58612f58" (UID: "fd038034-bcb2-4723-a94f-16af58612f58"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.251931 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-inventory" (OuterVolumeSpecName: "inventory") pod "fd038034-bcb2-4723-a94f-16af58612f58" (UID: "fd038034-bcb2-4723-a94f-16af58612f58"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.305438 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.305486 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9zbw\" (UniqueName: \"kubernetes.io/projected/fd038034-bcb2-4723-a94f-16af58612f58-kube-api-access-m9zbw\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.305497 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.305505 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd038034-bcb2-4723-a94f-16af58612f58-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.783027 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" event={"ID":"fd038034-bcb2-4723-a94f-16af58612f58","Type":"ContainerDied","Data":"2f6a8eb3e73162aeb7de28a03a299d90ed63b3facad7e8ba0a1bd2fa46198365"} Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.783069 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f6a8eb3e73162aeb7de28a03a299d90ed63b3facad7e8ba0a1bd2fa46198365" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.783116 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kzmcr" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.912637 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299"] Jan 20 18:49:42 crc kubenswrapper[4661]: E0120 18:49:42.913644 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd038034-bcb2-4723-a94f-16af58612f58" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.913664 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd038034-bcb2-4723-a94f-16af58612f58" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.913856 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd038034-bcb2-4723-a94f-16af58612f58" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.914439 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.919529 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.919792 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.919829 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.920186 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.921632 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:49:42 crc kubenswrapper[4661]: I0120 18:49:42.931874 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299"] Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.015804 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zl299\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.015844 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zl299\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.015908 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp2l8\" (UniqueName: \"kubernetes.io/projected/9898267e-7857-4ef5-8f1a-10d5f1a97cec-kube-api-access-cp2l8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zl299\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.015967 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zl299\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.117940 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zl299\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.118033 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zl299\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.118057 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zl299\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.118115 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp2l8\" (UniqueName: \"kubernetes.io/projected/9898267e-7857-4ef5-8f1a-10d5f1a97cec-kube-api-access-cp2l8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zl299\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.122425 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zl299\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.123032 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zl299\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.130530 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zl299\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.137474 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp2l8\" (UniqueName: \"kubernetes.io/projected/9898267e-7857-4ef5-8f1a-10d5f1a97cec-kube-api-access-cp2l8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zl299\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.237134 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:43 crc kubenswrapper[4661]: I0120 18:49:43.858121 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299"] Jan 20 18:49:44 crc kubenswrapper[4661]: I0120 18:49:44.325384 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:49:44 crc kubenswrapper[4661]: I0120 18:49:44.801559 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" event={"ID":"9898267e-7857-4ef5-8f1a-10d5f1a97cec","Type":"ContainerStarted","Data":"455d9c8cf94eb64e611103dc3c71c4a145d9d21ad0d08c42e4da4b82cf5fc1ed"} Jan 20 18:49:44 crc kubenswrapper[4661]: I0120 18:49:44.802082 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" event={"ID":"9898267e-7857-4ef5-8f1a-10d5f1a97cec","Type":"ContainerStarted","Data":"ff9e0f7a0a0c6eb48f9dc508b9ed3be8f6b4cc88723edd2d640452a9b889181e"} Jan 20 18:49:44 crc kubenswrapper[4661]: I0120 18:49:44.820602 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" podStartSLOduration=2.369359459 podStartE2EDuration="2.820586092s" podCreationTimestamp="2026-01-20 18:49:42 +0000 UTC" firstStartedPulling="2026-01-20 18:49:43.870965399 +0000 UTC m=+2640.201755061" lastFinishedPulling="2026-01-20 18:49:44.322192032 +0000 UTC m=+2640.652981694" observedRunningTime="2026-01-20 18:49:44.819084894 +0000 UTC m=+2641.149874546" watchObservedRunningTime="2026-01-20 18:49:44.820586092 +0000 UTC m=+2641.151375754" Jan 20 18:49:55 crc kubenswrapper[4661]: I0120 18:49:55.886052 4661 generic.go:334] "Generic (PLEG): container finished" podID="9898267e-7857-4ef5-8f1a-10d5f1a97cec" containerID="455d9c8cf94eb64e611103dc3c71c4a145d9d21ad0d08c42e4da4b82cf5fc1ed" exitCode=0 Jan 20 18:49:55 crc kubenswrapper[4661]: I0120 18:49:55.886127 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" event={"ID":"9898267e-7857-4ef5-8f1a-10d5f1a97cec","Type":"ContainerDied","Data":"455d9c8cf94eb64e611103dc3c71c4a145d9d21ad0d08c42e4da4b82cf5fc1ed"} Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.304612 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.403756 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp2l8\" (UniqueName: \"kubernetes.io/projected/9898267e-7857-4ef5-8f1a-10d5f1a97cec-kube-api-access-cp2l8\") pod \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.403825 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-ssh-key-openstack-edpm-ipam\") pod \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.403852 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-inventory\") pod \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.404013 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-ceph\") pod \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\" (UID: \"9898267e-7857-4ef5-8f1a-10d5f1a97cec\") " Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.411003 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-ceph" (OuterVolumeSpecName: "ceph") pod "9898267e-7857-4ef5-8f1a-10d5f1a97cec" (UID: "9898267e-7857-4ef5-8f1a-10d5f1a97cec"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.411343 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9898267e-7857-4ef5-8f1a-10d5f1a97cec-kube-api-access-cp2l8" (OuterVolumeSpecName: "kube-api-access-cp2l8") pod "9898267e-7857-4ef5-8f1a-10d5f1a97cec" (UID: "9898267e-7857-4ef5-8f1a-10d5f1a97cec"). InnerVolumeSpecName "kube-api-access-cp2l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.433540 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-inventory" (OuterVolumeSpecName: "inventory") pod "9898267e-7857-4ef5-8f1a-10d5f1a97cec" (UID: "9898267e-7857-4ef5-8f1a-10d5f1a97cec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.436313 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9898267e-7857-4ef5-8f1a-10d5f1a97cec" (UID: "9898267e-7857-4ef5-8f1a-10d5f1a97cec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.506941 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.507001 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp2l8\" (UniqueName: \"kubernetes.io/projected/9898267e-7857-4ef5-8f1a-10d5f1a97cec-kube-api-access-cp2l8\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.507021 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.507039 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9898267e-7857-4ef5-8f1a-10d5f1a97cec-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.905878 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" event={"ID":"9898267e-7857-4ef5-8f1a-10d5f1a97cec","Type":"ContainerDied","Data":"ff9e0f7a0a0c6eb48f9dc508b9ed3be8f6b4cc88723edd2d640452a9b889181e"} Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.905922 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff9e0f7a0a0c6eb48f9dc508b9ed3be8f6b4cc88723edd2d640452a9b889181e" Jan 20 18:49:57 crc kubenswrapper[4661]: I0120 18:49:57.905949 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zl299" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.014042 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7"] Jan 20 18:49:58 crc kubenswrapper[4661]: E0120 18:49:58.014483 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9898267e-7857-4ef5-8f1a-10d5f1a97cec" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.014506 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="9898267e-7857-4ef5-8f1a-10d5f1a97cec" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.014741 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="9898267e-7857-4ef5-8f1a-10d5f1a97cec" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.015422 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.018788 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.019326 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.019377 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.019336 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.019715 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.020114 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.020829 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.023119 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.039599 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7"] Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.116454 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.116542 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.116564 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.116586 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvhkx\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-kube-api-access-cvhkx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: 
\"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.116613 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.116639 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.116662 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.116755 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.116775 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.116809 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.116846 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: 
\"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.116868 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.116891 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.218844 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvhkx\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-kube-api-access-cvhkx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.219068 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.219161 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.219257 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.219355 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.219429 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.219502 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.219598 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.219692 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.219769 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.219869 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.219985 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.220054 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 
crc kubenswrapper[4661]: I0120 18:49:58.223063 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.223446 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.223900 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.224383 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.225235 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.225723 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.226480 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.227580 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: 
\"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.228913 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.230309 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.241009 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.242355 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.246134 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvhkx\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-kube-api-access-cvhkx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.339959 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.892835 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7"] Jan 20 18:49:58 crc kubenswrapper[4661]: W0120 18:49:58.895295 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47dfef92_6673_4a9f_9999_47f830dd42bc.slice/crio-b959d811d27f6a5e6f0b0bafbabc8b340360e39d1eb033e54726055a6a547fc7 WatchSource:0}: Error finding container b959d811d27f6a5e6f0b0bafbabc8b340360e39d1eb033e54726055a6a547fc7: Status 404 returned error can't find the container with id b959d811d27f6a5e6f0b0bafbabc8b340360e39d1eb033e54726055a6a547fc7 Jan 20 18:49:58 crc kubenswrapper[4661]: I0120 18:49:58.914809 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" event={"ID":"47dfef92-6673-4a9f-9999-47f830dd42bc","Type":"ContainerStarted","Data":"b959d811d27f6a5e6f0b0bafbabc8b340360e39d1eb033e54726055a6a547fc7"} Jan 20 18:49:59 crc kubenswrapper[4661]: I0120 18:49:59.924198 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" event={"ID":"47dfef92-6673-4a9f-9999-47f830dd42bc","Type":"ContainerStarted","Data":"793f3b705f62ab607a1bff4c388d4adae713bacf8cbbfa10469ce6ca6a44fe39"} Jan 20 18:49:59 crc kubenswrapper[4661]: I0120 18:49:59.943946 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" podStartSLOduration=2.508918462 podStartE2EDuration="2.943927669s" podCreationTimestamp="2026-01-20 18:49:57 +0000 UTC" firstStartedPulling="2026-01-20 18:49:58.898914704 +0000 UTC m=+2655.229704366" lastFinishedPulling="2026-01-20 18:49:59.333923911 +0000 UTC m=+2655.664713573" observedRunningTime="2026-01-20 18:49:59.942636604 +0000 UTC m=+2656.273426266" watchObservedRunningTime="2026-01-20 18:49:59.943927669 +0000 UTC m=+2656.274717331" Jan 20 18:50:35 crc kubenswrapper[4661]: I0120 18:50:35.238312 4661 generic.go:334] "Generic (PLEG): container finished" podID="47dfef92-6673-4a9f-9999-47f830dd42bc" containerID="793f3b705f62ab607a1bff4c388d4adae713bacf8cbbfa10469ce6ca6a44fe39" exitCode=0 Jan 20 18:50:35 crc kubenswrapper[4661]: I0120 18:50:35.238373 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" event={"ID":"47dfef92-6673-4a9f-9999-47f830dd42bc","Type":"ContainerDied","Data":"793f3b705f62ab607a1bff4c388d4adae713bacf8cbbfa10469ce6ca6a44fe39"} Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.649273 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.841348 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-inventory\") pod \"47dfef92-6673-4a9f-9999-47f830dd42bc\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.841440 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-nova-combined-ca-bundle\") pod \"47dfef92-6673-4a9f-9999-47f830dd42bc\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.841524 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-repo-setup-combined-ca-bundle\") pod \"47dfef92-6673-4a9f-9999-47f830dd42bc\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.841576 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"47dfef92-6673-4a9f-9999-47f830dd42bc\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.841651 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ssh-key-openstack-edpm-ipam\") pod \"47dfef92-6673-4a9f-9999-47f830dd42bc\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.842644 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-bootstrap-combined-ca-bundle\") pod \"47dfef92-6673-4a9f-9999-47f830dd42bc\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.842791 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"47dfef92-6673-4a9f-9999-47f830dd42bc\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.842904 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ceph\") pod \"47dfef92-6673-4a9f-9999-47f830dd42bc\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.842942 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-neutron-metadata-combined-ca-bundle\") pod \"47dfef92-6673-4a9f-9999-47f830dd42bc\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " Jan 20 18:50:36 crc kubenswrapper[4661]: 
I0120 18:50:36.843057 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"47dfef92-6673-4a9f-9999-47f830dd42bc\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.843133 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-libvirt-combined-ca-bundle\") pod \"47dfef92-6673-4a9f-9999-47f830dd42bc\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.843174 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvhkx\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-kube-api-access-cvhkx\") pod \"47dfef92-6673-4a9f-9999-47f830dd42bc\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.843210 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ovn-combined-ca-bundle\") pod \"47dfef92-6673-4a9f-9999-47f830dd42bc\" (UID: \"47dfef92-6673-4a9f-9999-47f830dd42bc\") " Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.848229 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "47dfef92-6673-4a9f-9999-47f830dd42bc" (UID: "47dfef92-6673-4a9f-9999-47f830dd42bc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.848875 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "47dfef92-6673-4a9f-9999-47f830dd42bc" (UID: "47dfef92-6673-4a9f-9999-47f830dd42bc"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.850535 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "47dfef92-6673-4a9f-9999-47f830dd42bc" (UID: "47dfef92-6673-4a9f-9999-47f830dd42bc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.851172 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "47dfef92-6673-4a9f-9999-47f830dd42bc" (UID: "47dfef92-6673-4a9f-9999-47f830dd42bc"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.851452 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "47dfef92-6673-4a9f-9999-47f830dd42bc" (UID: "47dfef92-6673-4a9f-9999-47f830dd42bc"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.851665 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "47dfef92-6673-4a9f-9999-47f830dd42bc" (UID: "47dfef92-6673-4a9f-9999-47f830dd42bc"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.852247 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "47dfef92-6673-4a9f-9999-47f830dd42bc" (UID: "47dfef92-6673-4a9f-9999-47f830dd42bc"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.857274 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "47dfef92-6673-4a9f-9999-47f830dd42bc" (UID: "47dfef92-6673-4a9f-9999-47f830dd42bc"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.857998 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ceph" (OuterVolumeSpecName: "ceph") pod "47dfef92-6673-4a9f-9999-47f830dd42bc" (UID: "47dfef92-6673-4a9f-9999-47f830dd42bc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.859022 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-kube-api-access-cvhkx" (OuterVolumeSpecName: "kube-api-access-cvhkx") pod "47dfef92-6673-4a9f-9999-47f830dd42bc" (UID: "47dfef92-6673-4a9f-9999-47f830dd42bc"). InnerVolumeSpecName "kube-api-access-cvhkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.867837 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "47dfef92-6673-4a9f-9999-47f830dd42bc" (UID: "47dfef92-6673-4a9f-9999-47f830dd42bc"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.884121 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-inventory" (OuterVolumeSpecName: "inventory") pod "47dfef92-6673-4a9f-9999-47f830dd42bc" (UID: "47dfef92-6673-4a9f-9999-47f830dd42bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.890182 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "47dfef92-6673-4a9f-9999-47f830dd42bc" (UID: "47dfef92-6673-4a9f-9999-47f830dd42bc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.946218 4661 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.946265 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvhkx\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-kube-api-access-cvhkx\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.946282 4661 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.946301 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.946316 4661 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.946331 4661 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.946346 4661 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.946366 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.946381 4661 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-bootstrap-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.946396 4661 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.946411 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.946428 4661 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47dfef92-6673-4a9f-9999-47f830dd42bc-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:36 crc kubenswrapper[4661]: I0120 18:50:36.946444 4661 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/47dfef92-6673-4a9f-9999-47f830dd42bc-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.261471 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" event={"ID":"47dfef92-6673-4a9f-9999-47f830dd42bc","Type":"ContainerDied","Data":"b959d811d27f6a5e6f0b0bafbabc8b340360e39d1eb033e54726055a6a547fc7"} Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.262070 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b959d811d27f6a5e6f0b0bafbabc8b340360e39d1eb033e54726055a6a547fc7" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.261773 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.388886 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq"] Jan 20 18:50:37 crc kubenswrapper[4661]: E0120 18:50:37.389341 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47dfef92-6673-4a9f-9999-47f830dd42bc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.389377 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="47dfef92-6673-4a9f-9999-47f830dd42bc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.389585 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="47dfef92-6673-4a9f-9999-47f830dd42bc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.390220 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.393105 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.393389 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.395364 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.398093 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.398860 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.406523 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq"] Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.454339 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.454551 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.454610 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvbjn\" (UniqueName: \"kubernetes.io/projected/3689afcd-a340-4415-a127-c9ce66ab8d7b-kube-api-access-xvbjn\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.454808 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.556421 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvbjn\" (UniqueName: \"kubernetes.io/projected/3689afcd-a340-4415-a127-c9ce66ab8d7b-kube-api-access-xvbjn\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.556954 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.557240 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.558011 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.561273 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.566079 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.574472 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.579056 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvbjn\" (UniqueName: \"kubernetes.io/projected/3689afcd-a340-4415-a127-c9ce66ab8d7b-kube-api-access-xvbjn\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:37 crc kubenswrapper[4661]: I0120 18:50:37.707955 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:38 crc kubenswrapper[4661]: I0120 18:50:38.235953 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq"] Jan 20 18:50:38 crc kubenswrapper[4661]: I0120 18:50:38.271310 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" event={"ID":"3689afcd-a340-4415-a127-c9ce66ab8d7b","Type":"ContainerStarted","Data":"95fcabedd12444ca8637bd0c6bf7f6f93dc7d1264a3b8c9f14e1e8954ed67fe3"} Jan 20 18:50:39 crc kubenswrapper[4661]: I0120 18:50:39.280912 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" event={"ID":"3689afcd-a340-4415-a127-c9ce66ab8d7b","Type":"ContainerStarted","Data":"43b07652f93fb659d8802c7126dd129495bc97518a45399ec7a43cb3a0994392"} Jan 20 18:50:39 crc kubenswrapper[4661]: I0120 18:50:39.298281 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" podStartSLOduration=1.743572551 podStartE2EDuration="2.298265759s" podCreationTimestamp="2026-01-20 18:50:37 +0000 UTC" firstStartedPulling="2026-01-20 18:50:38.237945966 +0000 UTC m=+2694.568735638" lastFinishedPulling="2026-01-20 18:50:38.792639174 +0000 UTC m=+2695.123428846" observedRunningTime="2026-01-20 18:50:39.297473697 +0000 UTC m=+2695.628263359" watchObservedRunningTime="2026-01-20 18:50:39.298265759 +0000 UTC m=+2695.629055421" Jan 20 18:50:45 crc kubenswrapper[4661]: I0120 18:50:45.325213 4661 generic.go:334] "Generic (PLEG): container finished" podID="3689afcd-a340-4415-a127-c9ce66ab8d7b" containerID="43b07652f93fb659d8802c7126dd129495bc97518a45399ec7a43cb3a0994392" exitCode=0 Jan 20 18:50:45 crc kubenswrapper[4661]: I0120 18:50:45.325274 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" event={"ID":"3689afcd-a340-4415-a127-c9ce66ab8d7b","Type":"ContainerDied","Data":"43b07652f93fb659d8802c7126dd129495bc97518a45399ec7a43cb3a0994392"} Jan 20 18:50:46 crc kubenswrapper[4661]: I0120 18:50:46.697651 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:46 crc kubenswrapper[4661]: I0120 18:50:46.842491 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-ssh-key-openstack-edpm-ipam\") pod \"3689afcd-a340-4415-a127-c9ce66ab8d7b\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " Jan 20 18:50:46 crc kubenswrapper[4661]: I0120 18:50:46.842574 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-ceph\") pod \"3689afcd-a340-4415-a127-c9ce66ab8d7b\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " Jan 20 18:50:46 crc kubenswrapper[4661]: I0120 18:50:46.842652 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-inventory\") pod \"3689afcd-a340-4415-a127-c9ce66ab8d7b\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " Jan 20 18:50:46 crc kubenswrapper[4661]: I0120 18:50:46.842762 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvbjn\" (UniqueName: \"kubernetes.io/projected/3689afcd-a340-4415-a127-c9ce66ab8d7b-kube-api-access-xvbjn\") pod \"3689afcd-a340-4415-a127-c9ce66ab8d7b\" (UID: \"3689afcd-a340-4415-a127-c9ce66ab8d7b\") " Jan 20 18:50:46 crc kubenswrapper[4661]: I0120 18:50:46.848279 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-ceph" (OuterVolumeSpecName: "ceph") pod "3689afcd-a340-4415-a127-c9ce66ab8d7b" (UID: "3689afcd-a340-4415-a127-c9ce66ab8d7b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:50:46 crc kubenswrapper[4661]: I0120 18:50:46.848636 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3689afcd-a340-4415-a127-c9ce66ab8d7b-kube-api-access-xvbjn" (OuterVolumeSpecName: "kube-api-access-xvbjn") pod "3689afcd-a340-4415-a127-c9ce66ab8d7b" (UID: "3689afcd-a340-4415-a127-c9ce66ab8d7b"). InnerVolumeSpecName "kube-api-access-xvbjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:50:46 crc kubenswrapper[4661]: I0120 18:50:46.871856 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3689afcd-a340-4415-a127-c9ce66ab8d7b" (UID: "3689afcd-a340-4415-a127-c9ce66ab8d7b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:50:46 crc kubenswrapper[4661]: I0120 18:50:46.872627 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-inventory" (OuterVolumeSpecName: "inventory") pod "3689afcd-a340-4415-a127-c9ce66ab8d7b" (UID: "3689afcd-a340-4415-a127-c9ce66ab8d7b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:50:46 crc kubenswrapper[4661]: I0120 18:50:46.945972 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:46 crc kubenswrapper[4661]: I0120 18:50:46.946024 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:46 crc kubenswrapper[4661]: I0120 18:50:46.946050 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3689afcd-a340-4415-a127-c9ce66ab8d7b-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:46 crc kubenswrapper[4661]: I0120 18:50:46.946074 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvbjn\" (UniqueName: \"kubernetes.io/projected/3689afcd-a340-4415-a127-c9ce66ab8d7b-kube-api-access-xvbjn\") on node \"crc\" DevicePath \"\"" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.346197 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" event={"ID":"3689afcd-a340-4415-a127-c9ce66ab8d7b","Type":"ContainerDied","Data":"95fcabedd12444ca8637bd0c6bf7f6f93dc7d1264a3b8c9f14e1e8954ed67fe3"} Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.346736 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95fcabedd12444ca8637bd0c6bf7f6f93dc7d1264a3b8c9f14e1e8954ed67fe3" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.346846 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.457408 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml"] Jan 20 18:50:47 crc kubenswrapper[4661]: E0120 18:50:47.457965 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3689afcd-a340-4415-a127-c9ce66ab8d7b" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.458033 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="3689afcd-a340-4415-a127-c9ce66ab8d7b" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.458276 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="3689afcd-a340-4415-a127-c9ce66ab8d7b" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.458899 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.469757 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.470242 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.470511 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.471014 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.471189 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.472525 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.475053 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml"] Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.656616 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.656711 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.657208 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.657267 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.657312 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 
18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.657390 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tnvr\" (UniqueName: \"kubernetes.io/projected/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-kube-api-access-9tnvr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.758766 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tnvr\" (UniqueName: \"kubernetes.io/projected/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-kube-api-access-9tnvr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.758873 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.758907 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.758936 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.758966 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.758990 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.760167 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.763055 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.763115 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.765458 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.780421 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:47 crc kubenswrapper[4661]: I0120 18:50:47.788970 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tnvr\" (UniqueName: \"kubernetes.io/projected/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-kube-api-access-9tnvr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kgwml\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:48 crc kubenswrapper[4661]: I0120 18:50:48.075590 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:50:48 crc kubenswrapper[4661]: I0120 18:50:48.607612 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml"] Jan 20 18:50:49 crc kubenswrapper[4661]: I0120 18:50:49.362212 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" event={"ID":"7f25c2cb-31da-4f0d-b7cb-472e09443f4a","Type":"ContainerStarted","Data":"af83bb2e61c28cd29e7cf5c7edc0a05a1197c4ad48fe2d89d35496e8f41775df"} Jan 20 18:50:49 crc kubenswrapper[4661]: I0120 18:50:49.362926 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" event={"ID":"7f25c2cb-31da-4f0d-b7cb-472e09443f4a","Type":"ContainerStarted","Data":"f25da0e392d5637b38adadc32d4787569e7377e14f354c2cacca5c8468d90a18"} Jan 20 18:50:49 crc kubenswrapper[4661]: I0120 18:50:49.385223 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" podStartSLOduration=1.893320473 podStartE2EDuration="2.385194332s" podCreationTimestamp="2026-01-20 18:50:47 +0000 UTC" firstStartedPulling="2026-01-20 18:50:48.600422211 +0000 UTC m=+2704.931211873" lastFinishedPulling="2026-01-20 18:50:49.09229607 +0000 UTC m=+2705.423085732" observedRunningTime="2026-01-20 18:50:49.379983093 +0000 UTC m=+2705.710772745" watchObservedRunningTime="2026-01-20 18:50:49.385194332 +0000 UTC m=+2705.715983994" Jan 20 18:50:59 crc kubenswrapper[4661]: I0120 18:50:59.324107 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:50:59 crc kubenswrapper[4661]: I0120 18:50:59.324737 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:51:29 crc kubenswrapper[4661]: I0120 18:51:29.323292 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:51:29 crc kubenswrapper[4661]: I0120 18:51:29.323962 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:51:43 crc kubenswrapper[4661]: I0120 18:51:43.673590 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ndfc8"] Jan 20 18:51:43 crc kubenswrapper[4661]: I0120 18:51:43.676942 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:51:43 crc kubenswrapper[4661]: I0120 18:51:43.684907 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ndfc8"] Jan 20 18:51:43 crc kubenswrapper[4661]: I0120 18:51:43.780358 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caaf7f6-5372-450e-ab92-1026d295f050-catalog-content\") pod \"redhat-operators-ndfc8\" (UID: \"7caaf7f6-5372-450e-ab92-1026d295f050\") " pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:51:43 crc kubenswrapper[4661]: I0120 18:51:43.780424 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caaf7f6-5372-450e-ab92-1026d295f050-utilities\") pod \"redhat-operators-ndfc8\" (UID: \"7caaf7f6-5372-450e-ab92-1026d295f050\") " pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:51:43 crc kubenswrapper[4661]: I0120 18:51:43.780478 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v6t6\" (UniqueName: \"kubernetes.io/projected/7caaf7f6-5372-450e-ab92-1026d295f050-kube-api-access-8v6t6\") pod \"redhat-operators-ndfc8\" (UID: \"7caaf7f6-5372-450e-ab92-1026d295f050\") " pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:51:43 crc kubenswrapper[4661]: I0120 18:51:43.882104 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caaf7f6-5372-450e-ab92-1026d295f050-catalog-content\") pod \"redhat-operators-ndfc8\" (UID: \"7caaf7f6-5372-450e-ab92-1026d295f050\") " pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:51:43 crc kubenswrapper[4661]: I0120 18:51:43.882177 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caaf7f6-5372-450e-ab92-1026d295f050-utilities\") pod \"redhat-operators-ndfc8\" (UID: \"7caaf7f6-5372-450e-ab92-1026d295f050\") " pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:51:43 crc kubenswrapper[4661]: I0120 18:51:43.882228 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v6t6\" (UniqueName: \"kubernetes.io/projected/7caaf7f6-5372-450e-ab92-1026d295f050-kube-api-access-8v6t6\") pod \"redhat-operators-ndfc8\" (UID: \"7caaf7f6-5372-450e-ab92-1026d295f050\") " pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:51:43 crc kubenswrapper[4661]: I0120 18:51:43.882764 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caaf7f6-5372-450e-ab92-1026d295f050-catalog-content\") pod \"redhat-operators-ndfc8\" (UID: \"7caaf7f6-5372-450e-ab92-1026d295f050\") " pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:51:43 crc kubenswrapper[4661]: I0120 18:51:43.882889 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caaf7f6-5372-450e-ab92-1026d295f050-utilities\") pod \"redhat-operators-ndfc8\" (UID: \"7caaf7f6-5372-450e-ab92-1026d295f050\") " pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:51:43 crc kubenswrapper[4661]: I0120 18:51:43.905590 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8v6t6\" (UniqueName: \"kubernetes.io/projected/7caaf7f6-5372-450e-ab92-1026d295f050-kube-api-access-8v6t6\") pod \"redhat-operators-ndfc8\" (UID: \"7caaf7f6-5372-450e-ab92-1026d295f050\") " pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:51:43 crc kubenswrapper[4661]: I0120 18:51:43.999764 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:51:44 crc kubenswrapper[4661]: I0120 18:51:44.507497 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ndfc8"] Jan 20 18:51:44 crc kubenswrapper[4661]: I0120 18:51:44.846573 4661 generic.go:334] "Generic (PLEG): container finished" podID="7caaf7f6-5372-450e-ab92-1026d295f050" containerID="a4792ef7f7b23326b5b3e98ee12ee04c0ac9a91974c913aede2505192badf4b3" exitCode=0 Jan 20 18:51:44 crc kubenswrapper[4661]: I0120 18:51:44.846621 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndfc8" event={"ID":"7caaf7f6-5372-450e-ab92-1026d295f050","Type":"ContainerDied","Data":"a4792ef7f7b23326b5b3e98ee12ee04c0ac9a91974c913aede2505192badf4b3"} Jan 20 18:51:44 crc kubenswrapper[4661]: I0120 18:51:44.846651 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndfc8" event={"ID":"7caaf7f6-5372-450e-ab92-1026d295f050","Type":"ContainerStarted","Data":"2c6b7190a31f8ad73ab9c196ef863fef756f6e6ecbc613cf663149f86c109f20"} Jan 20 18:51:44 crc kubenswrapper[4661]: I0120 18:51:44.851376 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 18:51:46 crc kubenswrapper[4661]: I0120 18:51:46.868774 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndfc8" event={"ID":"7caaf7f6-5372-450e-ab92-1026d295f050","Type":"ContainerStarted","Data":"4faaac4086b7d86d1c570d24cf4e24b5687acd384789a10a17819a515958c261"} Jan 20 18:51:47 crc kubenswrapper[4661]: I0120 18:51:47.877291 4661 generic.go:334] "Generic (PLEG): container finished" podID="7caaf7f6-5372-450e-ab92-1026d295f050" containerID="4faaac4086b7d86d1c570d24cf4e24b5687acd384789a10a17819a515958c261" exitCode=0 Jan 20 18:51:47 crc kubenswrapper[4661]: I0120 18:51:47.877349 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndfc8" event={"ID":"7caaf7f6-5372-450e-ab92-1026d295f050","Type":"ContainerDied","Data":"4faaac4086b7d86d1c570d24cf4e24b5687acd384789a10a17819a515958c261"} Jan 20 18:51:49 crc kubenswrapper[4661]: I0120 18:51:49.894007 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndfc8" event={"ID":"7caaf7f6-5372-450e-ab92-1026d295f050","Type":"ContainerStarted","Data":"03aa3f67d40dffb3612a6ee3441ba081a462c9bfbc808d0bf151337ec7027b37"} Jan 20 18:51:49 crc kubenswrapper[4661]: I0120 18:51:49.935877 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ndfc8" podStartSLOduration=2.670253071 podStartE2EDuration="6.935847845s" podCreationTimestamp="2026-01-20 18:51:43 +0000 UTC" firstStartedPulling="2026-01-20 18:51:44.851055275 +0000 UTC m=+2761.181844937" lastFinishedPulling="2026-01-20 18:51:49.116650049 +0000 UTC m=+2765.447439711" observedRunningTime="2026-01-20 18:51:49.912821023 +0000 UTC m=+2766.243610685" watchObservedRunningTime="2026-01-20 18:51:49.935847845 +0000 UTC m=+2766.266637507" Jan 20 18:51:54 crc 
kubenswrapper[4661]: I0120 18:51:54.000209 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:51:54 crc kubenswrapper[4661]: I0120 18:51:54.000708 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:51:55 crc kubenswrapper[4661]: I0120 18:51:55.062737 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ndfc8" podUID="7caaf7f6-5372-450e-ab92-1026d295f050" containerName="registry-server" probeResult="failure" output=< Jan 20 18:51:55 crc kubenswrapper[4661]: timeout: failed to connect service ":50051" within 1s Jan 20 18:51:55 crc kubenswrapper[4661]: > Jan 20 18:51:59 crc kubenswrapper[4661]: I0120 18:51:59.324071 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:51:59 crc kubenswrapper[4661]: I0120 18:51:59.325100 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:51:59 crc kubenswrapper[4661]: I0120 18:51:59.325202 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:51:59 crc kubenswrapper[4661]: I0120 18:51:59.326177 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09f90b8546dddfc6e996cdbb77ceb056fba14e8d7b35198aeb2ca0341a0647b6"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 18:51:59 crc kubenswrapper[4661]: I0120 18:51:59.326307 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://09f90b8546dddfc6e996cdbb77ceb056fba14e8d7b35198aeb2ca0341a0647b6" gracePeriod=600 Jan 20 18:51:59 crc kubenswrapper[4661]: I0120 18:51:59.998907 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="09f90b8546dddfc6e996cdbb77ceb056fba14e8d7b35198aeb2ca0341a0647b6" exitCode=0 Jan 20 18:51:59 crc kubenswrapper[4661]: I0120 18:51:59.999156 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"09f90b8546dddfc6e996cdbb77ceb056fba14e8d7b35198aeb2ca0341a0647b6"} Jan 20 18:51:59 crc kubenswrapper[4661]: I0120 18:51:59.999183 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593"} Jan 20 18:51:59 crc kubenswrapper[4661]: I0120 18:51:59.999199 4661 
scope.go:117] "RemoveContainer" containerID="927fc50872b021bab1ad3425d5a889eb67f5570428c82cb9e465408890506791" Jan 20 18:52:04 crc kubenswrapper[4661]: I0120 18:52:04.070830 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:52:04 crc kubenswrapper[4661]: I0120 18:52:04.159476 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:52:04 crc kubenswrapper[4661]: I0120 18:52:04.310881 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ndfc8"] Jan 20 18:52:06 crc kubenswrapper[4661]: I0120 18:52:06.062760 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ndfc8" podUID="7caaf7f6-5372-450e-ab92-1026d295f050" containerName="registry-server" containerID="cri-o://03aa3f67d40dffb3612a6ee3441ba081a462c9bfbc808d0bf151337ec7027b37" gracePeriod=2 Jan 20 18:52:06 crc kubenswrapper[4661]: I0120 18:52:06.540131 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:52:06 crc kubenswrapper[4661]: I0120 18:52:06.546333 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v6t6\" (UniqueName: \"kubernetes.io/projected/7caaf7f6-5372-450e-ab92-1026d295f050-kube-api-access-8v6t6\") pod \"7caaf7f6-5372-450e-ab92-1026d295f050\" (UID: \"7caaf7f6-5372-450e-ab92-1026d295f050\") " Jan 20 18:52:06 crc kubenswrapper[4661]: I0120 18:52:06.546497 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caaf7f6-5372-450e-ab92-1026d295f050-catalog-content\") pod \"7caaf7f6-5372-450e-ab92-1026d295f050\" (UID: \"7caaf7f6-5372-450e-ab92-1026d295f050\") " Jan 20 18:52:06 crc kubenswrapper[4661]: I0120 18:52:06.546526 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caaf7f6-5372-450e-ab92-1026d295f050-utilities\") pod \"7caaf7f6-5372-450e-ab92-1026d295f050\" (UID: \"7caaf7f6-5372-450e-ab92-1026d295f050\") " Jan 20 18:52:06 crc kubenswrapper[4661]: I0120 18:52:06.547194 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7caaf7f6-5372-450e-ab92-1026d295f050-utilities" (OuterVolumeSpecName: "utilities") pod "7caaf7f6-5372-450e-ab92-1026d295f050" (UID: "7caaf7f6-5372-450e-ab92-1026d295f050"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:52:06 crc kubenswrapper[4661]: I0120 18:52:06.553843 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7caaf7f6-5372-450e-ab92-1026d295f050-kube-api-access-8v6t6" (OuterVolumeSpecName: "kube-api-access-8v6t6") pod "7caaf7f6-5372-450e-ab92-1026d295f050" (UID: "7caaf7f6-5372-450e-ab92-1026d295f050"). InnerVolumeSpecName "kube-api-access-8v6t6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:52:06 crc kubenswrapper[4661]: I0120 18:52:06.649093 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v6t6\" (UniqueName: \"kubernetes.io/projected/7caaf7f6-5372-450e-ab92-1026d295f050-kube-api-access-8v6t6\") on node \"crc\" DevicePath \"\"" Jan 20 18:52:06 crc kubenswrapper[4661]: I0120 18:52:06.649361 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caaf7f6-5372-450e-ab92-1026d295f050-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:52:06 crc kubenswrapper[4661]: I0120 18:52:06.679506 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7caaf7f6-5372-450e-ab92-1026d295f050-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7caaf7f6-5372-450e-ab92-1026d295f050" (UID: "7caaf7f6-5372-450e-ab92-1026d295f050"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:52:06 crc kubenswrapper[4661]: I0120 18:52:06.750233 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caaf7f6-5372-450e-ab92-1026d295f050-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.077841 4661 generic.go:334] "Generic (PLEG): container finished" podID="7caaf7f6-5372-450e-ab92-1026d295f050" containerID="03aa3f67d40dffb3612a6ee3441ba081a462c9bfbc808d0bf151337ec7027b37" exitCode=0 Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.077926 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndfc8" event={"ID":"7caaf7f6-5372-450e-ab92-1026d295f050","Type":"ContainerDied","Data":"03aa3f67d40dffb3612a6ee3441ba081a462c9bfbc808d0bf151337ec7027b37"} Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.077992 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndfc8" event={"ID":"7caaf7f6-5372-450e-ab92-1026d295f050","Type":"ContainerDied","Data":"2c6b7190a31f8ad73ab9c196ef863fef756f6e6ecbc613cf663149f86c109f20"} Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.078041 4661 scope.go:117] "RemoveContainer" containerID="03aa3f67d40dffb3612a6ee3441ba081a462c9bfbc808d0bf151337ec7027b37" Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.078322 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ndfc8" Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.115136 4661 scope.go:117] "RemoveContainer" containerID="4faaac4086b7d86d1c570d24cf4e24b5687acd384789a10a17819a515958c261" Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.158112 4661 scope.go:117] "RemoveContainer" containerID="a4792ef7f7b23326b5b3e98ee12ee04c0ac9a91974c913aede2505192badf4b3" Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.168958 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ndfc8"] Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.178271 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ndfc8"] Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.188262 4661 scope.go:117] "RemoveContainer" containerID="03aa3f67d40dffb3612a6ee3441ba081a462c9bfbc808d0bf151337ec7027b37" Jan 20 18:52:07 crc kubenswrapper[4661]: E0120 18:52:07.188738 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03aa3f67d40dffb3612a6ee3441ba081a462c9bfbc808d0bf151337ec7027b37\": container with ID starting with 03aa3f67d40dffb3612a6ee3441ba081a462c9bfbc808d0bf151337ec7027b37 not found: ID does not exist" containerID="03aa3f67d40dffb3612a6ee3441ba081a462c9bfbc808d0bf151337ec7027b37" Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.188841 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03aa3f67d40dffb3612a6ee3441ba081a462c9bfbc808d0bf151337ec7027b37"} err="failed to get container status \"03aa3f67d40dffb3612a6ee3441ba081a462c9bfbc808d0bf151337ec7027b37\": rpc error: code = NotFound desc = could not find container \"03aa3f67d40dffb3612a6ee3441ba081a462c9bfbc808d0bf151337ec7027b37\": container with ID starting with 03aa3f67d40dffb3612a6ee3441ba081a462c9bfbc808d0bf151337ec7027b37 not found: ID does not exist" Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.188929 4661 scope.go:117] "RemoveContainer" containerID="4faaac4086b7d86d1c570d24cf4e24b5687acd384789a10a17819a515958c261" Jan 20 18:52:07 crc kubenswrapper[4661]: E0120 18:52:07.189476 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4faaac4086b7d86d1c570d24cf4e24b5687acd384789a10a17819a515958c261\": container with ID starting with 4faaac4086b7d86d1c570d24cf4e24b5687acd384789a10a17819a515958c261 not found: ID does not exist" containerID="4faaac4086b7d86d1c570d24cf4e24b5687acd384789a10a17819a515958c261" Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.189543 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4faaac4086b7d86d1c570d24cf4e24b5687acd384789a10a17819a515958c261"} err="failed to get container status \"4faaac4086b7d86d1c570d24cf4e24b5687acd384789a10a17819a515958c261\": rpc error: code = NotFound desc = could not find container \"4faaac4086b7d86d1c570d24cf4e24b5687acd384789a10a17819a515958c261\": container with ID starting with 4faaac4086b7d86d1c570d24cf4e24b5687acd384789a10a17819a515958c261 not found: ID does not exist" Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.189576 4661 scope.go:117] "RemoveContainer" containerID="a4792ef7f7b23326b5b3e98ee12ee04c0ac9a91974c913aede2505192badf4b3" Jan 20 18:52:07 crc kubenswrapper[4661]: E0120 18:52:07.189981 4661 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"a4792ef7f7b23326b5b3e98ee12ee04c0ac9a91974c913aede2505192badf4b3\": container with ID starting with a4792ef7f7b23326b5b3e98ee12ee04c0ac9a91974c913aede2505192badf4b3 not found: ID does not exist" containerID="a4792ef7f7b23326b5b3e98ee12ee04c0ac9a91974c913aede2505192badf4b3" Jan 20 18:52:07 crc kubenswrapper[4661]: I0120 18:52:07.190012 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4792ef7f7b23326b5b3e98ee12ee04c0ac9a91974c913aede2505192badf4b3"} err="failed to get container status \"a4792ef7f7b23326b5b3e98ee12ee04c0ac9a91974c913aede2505192badf4b3\": rpc error: code = NotFound desc = could not find container \"a4792ef7f7b23326b5b3e98ee12ee04c0ac9a91974c913aede2505192badf4b3\": container with ID starting with a4792ef7f7b23326b5b3e98ee12ee04c0ac9a91974c913aede2505192badf4b3 not found: ID does not exist" Jan 20 18:52:08 crc kubenswrapper[4661]: I0120 18:52:08.153182 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7caaf7f6-5372-450e-ab92-1026d295f050" path="/var/lib/kubelet/pods/7caaf7f6-5372-450e-ab92-1026d295f050/volumes" Jan 20 18:52:13 crc kubenswrapper[4661]: I0120 18:52:13.136213 4661 generic.go:334] "Generic (PLEG): container finished" podID="7f25c2cb-31da-4f0d-b7cb-472e09443f4a" containerID="af83bb2e61c28cd29e7cf5c7edc0a05a1197c4ad48fe2d89d35496e8f41775df" exitCode=0 Jan 20 18:52:13 crc kubenswrapper[4661]: I0120 18:52:13.136310 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" event={"ID":"7f25c2cb-31da-4f0d-b7cb-472e09443f4a","Type":"ContainerDied","Data":"af83bb2e61c28cd29e7cf5c7edc0a05a1197c4ad48fe2d89d35496e8f41775df"} Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.166611 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" event={"ID":"7f25c2cb-31da-4f0d-b7cb-472e09443f4a","Type":"ContainerDied","Data":"f25da0e392d5637b38adadc32d4787569e7377e14f354c2cacca5c8468d90a18"} Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.166948 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f25da0e392d5637b38adadc32d4787569e7377e14f354c2cacca5c8468d90a18" Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.170857 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.233076 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ssh-key-openstack-edpm-ipam\") pod \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.233194 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ceph\") pod \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.233227 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ovncontroller-config-0\") pod \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.233338 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ovn-combined-ca-bundle\") pod \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.233374 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tnvr\" (UniqueName: \"kubernetes.io/projected/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-kube-api-access-9tnvr\") pod \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.233445 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-inventory\") pod \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\" (UID: \"7f25c2cb-31da-4f0d-b7cb-472e09443f4a\") " Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.244462 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7f25c2cb-31da-4f0d-b7cb-472e09443f4a" (UID: "7f25c2cb-31da-4f0d-b7cb-472e09443f4a"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.246013 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ceph" (OuterVolumeSpecName: "ceph") pod "7f25c2cb-31da-4f0d-b7cb-472e09443f4a" (UID: "7f25c2cb-31da-4f0d-b7cb-472e09443f4a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.249901 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-kube-api-access-9tnvr" (OuterVolumeSpecName: "kube-api-access-9tnvr") pod "7f25c2cb-31da-4f0d-b7cb-472e09443f4a" (UID: "7f25c2cb-31da-4f0d-b7cb-472e09443f4a"). InnerVolumeSpecName "kube-api-access-9tnvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.262885 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-inventory" (OuterVolumeSpecName: "inventory") pod "7f25c2cb-31da-4f0d-b7cb-472e09443f4a" (UID: "7f25c2cb-31da-4f0d-b7cb-472e09443f4a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.269614 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f25c2cb-31da-4f0d-b7cb-472e09443f4a" (UID: "7f25c2cb-31da-4f0d-b7cb-472e09443f4a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.273586 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7f25c2cb-31da-4f0d-b7cb-472e09443f4a" (UID: "7f25c2cb-31da-4f0d-b7cb-472e09443f4a"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.352877 4661 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.352919 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tnvr\" (UniqueName: \"kubernetes.io/projected/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-kube-api-access-9tnvr\") on node \"crc\" DevicePath \"\"" Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.352933 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.352946 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.352958 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:52:15 crc kubenswrapper[4661]: I0120 18:52:15.352970 4661 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f25c2cb-31da-4f0d-b7cb-472e09443f4a-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.179060 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kgwml" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.406347 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj"] Jan 20 18:52:16 crc kubenswrapper[4661]: E0120 18:52:16.407417 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f25c2cb-31da-4f0d-b7cb-472e09443f4a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.407443 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f25c2cb-31da-4f0d-b7cb-472e09443f4a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 20 18:52:16 crc kubenswrapper[4661]: E0120 18:52:16.407465 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7caaf7f6-5372-450e-ab92-1026d295f050" containerName="registry-server" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.407473 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="7caaf7f6-5372-450e-ab92-1026d295f050" containerName="registry-server" Jan 20 18:52:16 crc kubenswrapper[4661]: E0120 18:52:16.407489 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7caaf7f6-5372-450e-ab92-1026d295f050" containerName="extract-utilities" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.407499 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="7caaf7f6-5372-450e-ab92-1026d295f050" containerName="extract-utilities" Jan 20 18:52:16 crc kubenswrapper[4661]: E0120 18:52:16.407507 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7caaf7f6-5372-450e-ab92-1026d295f050" containerName="extract-content" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.407513 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="7caaf7f6-5372-450e-ab92-1026d295f050" containerName="extract-content" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.407728 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="7caaf7f6-5372-450e-ab92-1026d295f050" containerName="registry-server" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.407749 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f25c2cb-31da-4f0d-b7cb-472e09443f4a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.408291 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.410241 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.411343 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.411815 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.414823 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.415213 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.415387 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.415233 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.430335 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj"] Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.479801 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.479903 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.479935 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92zv\" (UniqueName: \"kubernetes.io/projected/58c705e0-9353-44b0-b3af-65c84ddb1f44-kube-api-access-v92zv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.480047 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: 
I0120 18:52:16.480108 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.480148 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.480178 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.582115 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.582175 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92zv\" (UniqueName: \"kubernetes.io/projected/58c705e0-9353-44b0-b3af-65c84ddb1f44-kube-api-access-v92zv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.582229 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.582251 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.582282 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.582305 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.582366 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.587102 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.587608 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.590697 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.590861 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.591414 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.592042 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.608710 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92zv\" (UniqueName: \"kubernetes.io/projected/58c705e0-9353-44b0-b3af-65c84ddb1f44-kube-api-access-v92zv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:16 crc kubenswrapper[4661]: I0120 18:52:16.727933 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:52:17 crc kubenswrapper[4661]: I0120 18:52:17.237166 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj"] Jan 20 18:52:17 crc kubenswrapper[4661]: W0120 18:52:17.241507 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58c705e0_9353_44b0_b3af_65c84ddb1f44.slice/crio-550d19c8cf5f7f18258deba4b3a3fb114c7ba12bbf7a775ee15ad4abc898bd8e WatchSource:0}: Error finding container 550d19c8cf5f7f18258deba4b3a3fb114c7ba12bbf7a775ee15ad4abc898bd8e: Status 404 returned error can't find the container with id 550d19c8cf5f7f18258deba4b3a3fb114c7ba12bbf7a775ee15ad4abc898bd8e Jan 20 18:52:18 crc kubenswrapper[4661]: I0120 18:52:18.196121 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" event={"ID":"58c705e0-9353-44b0-b3af-65c84ddb1f44","Type":"ContainerStarted","Data":"f9a1dd27d443eb9a401f456e9beaedf6da2823899b6df8878504ca11c9e7bdc9"} Jan 20 18:52:18 crc kubenswrapper[4661]: I0120 18:52:18.196713 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" event={"ID":"58c705e0-9353-44b0-b3af-65c84ddb1f44","Type":"ContainerStarted","Data":"550d19c8cf5f7f18258deba4b3a3fb114c7ba12bbf7a775ee15ad4abc898bd8e"} Jan 20 18:52:18 crc kubenswrapper[4661]: I0120 18:52:18.219145 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" podStartSLOduration=1.6428996740000001 podStartE2EDuration="2.219127895s" podCreationTimestamp="2026-01-20 18:52:16 +0000 UTC" firstStartedPulling="2026-01-20 18:52:17.243913885 +0000 UTC m=+2793.574703567" lastFinishedPulling="2026-01-20 18:52:17.820142126 +0000 UTC m=+2794.150931788" observedRunningTime="2026-01-20 18:52:18.215110211 +0000 UTC m=+2794.545899883" watchObservedRunningTime="2026-01-20 18:52:18.219127895 +0000 UTC m=+2794.549917557" Jan 20 18:53:32 crc kubenswrapper[4661]: E0120 18:53:32.492193 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58c705e0_9353_44b0_b3af_65c84ddb1f44.slice/crio-f9a1dd27d443eb9a401f456e9beaedf6da2823899b6df8878504ca11c9e7bdc9.scope\": RecentStats: unable to find data 
in memory cache]" Jan 20 18:53:32 crc kubenswrapper[4661]: I0120 18:53:32.913490 4661 generic.go:334] "Generic (PLEG): container finished" podID="58c705e0-9353-44b0-b3af-65c84ddb1f44" containerID="f9a1dd27d443eb9a401f456e9beaedf6da2823899b6df8878504ca11c9e7bdc9" exitCode=0 Jan 20 18:53:32 crc kubenswrapper[4661]: I0120 18:53:32.913566 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" event={"ID":"58c705e0-9353-44b0-b3af-65c84ddb1f44","Type":"ContainerDied","Data":"f9a1dd27d443eb9a401f456e9beaedf6da2823899b6df8878504ca11c9e7bdc9"} Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.447211 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.548807 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-ceph\") pod \"58c705e0-9353-44b0-b3af-65c84ddb1f44\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.548912 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-neutron-ovn-metadata-agent-neutron-config-0\") pod \"58c705e0-9353-44b0-b3af-65c84ddb1f44\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.548944 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-nova-metadata-neutron-config-0\") pod \"58c705e0-9353-44b0-b3af-65c84ddb1f44\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.548987 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-ssh-key-openstack-edpm-ipam\") pod \"58c705e0-9353-44b0-b3af-65c84ddb1f44\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.549086 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-inventory\") pod \"58c705e0-9353-44b0-b3af-65c84ddb1f44\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.549118 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v92zv\" (UniqueName: \"kubernetes.io/projected/58c705e0-9353-44b0-b3af-65c84ddb1f44-kube-api-access-v92zv\") pod \"58c705e0-9353-44b0-b3af-65c84ddb1f44\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.549172 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-neutron-metadata-combined-ca-bundle\") pod \"58c705e0-9353-44b0-b3af-65c84ddb1f44\" (UID: \"58c705e0-9353-44b0-b3af-65c84ddb1f44\") " Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.565034 4661 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-ceph" (OuterVolumeSpecName: "ceph") pod "58c705e0-9353-44b0-b3af-65c84ddb1f44" (UID: "58c705e0-9353-44b0-b3af-65c84ddb1f44"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.571610 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "58c705e0-9353-44b0-b3af-65c84ddb1f44" (UID: "58c705e0-9353-44b0-b3af-65c84ddb1f44"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.576032 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c705e0-9353-44b0-b3af-65c84ddb1f44-kube-api-access-v92zv" (OuterVolumeSpecName: "kube-api-access-v92zv") pod "58c705e0-9353-44b0-b3af-65c84ddb1f44" (UID: "58c705e0-9353-44b0-b3af-65c84ddb1f44"). InnerVolumeSpecName "kube-api-access-v92zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.591189 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "58c705e0-9353-44b0-b3af-65c84ddb1f44" (UID: "58c705e0-9353-44b0-b3af-65c84ddb1f44"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.591573 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "58c705e0-9353-44b0-b3af-65c84ddb1f44" (UID: "58c705e0-9353-44b0-b3af-65c84ddb1f44"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.595830 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-inventory" (OuterVolumeSpecName: "inventory") pod "58c705e0-9353-44b0-b3af-65c84ddb1f44" (UID: "58c705e0-9353-44b0-b3af-65c84ddb1f44"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.599754 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "58c705e0-9353-44b0-b3af-65c84ddb1f44" (UID: "58c705e0-9353-44b0-b3af-65c84ddb1f44"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.650683 4661 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.650910 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.650970 4661 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.651039 4661 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.651101 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.651168 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58c705e0-9353-44b0-b3af-65c84ddb1f44-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.651230 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v92zv\" (UniqueName: \"kubernetes.io/projected/58c705e0-9353-44b0-b3af-65c84ddb1f44-kube-api-access-v92zv\") on node \"crc\" DevicePath \"\"" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.934037 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" event={"ID":"58c705e0-9353-44b0-b3af-65c84ddb1f44","Type":"ContainerDied","Data":"550d19c8cf5f7f18258deba4b3a3fb114c7ba12bbf7a775ee15ad4abc898bd8e"} Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.934071 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="550d19c8cf5f7f18258deba4b3a3fb114c7ba12bbf7a775ee15ad4abc898bd8e" Jan 20 18:53:34 crc kubenswrapper[4661]: I0120 18:53:34.934362 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.050873 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk"] Jan 20 18:53:35 crc kubenswrapper[4661]: E0120 18:53:35.051640 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c705e0-9353-44b0-b3af-65c84ddb1f44" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.051810 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c705e0-9353-44b0-b3af-65c84ddb1f44" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.052075 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c705e0-9353-44b0-b3af-65c84ddb1f44" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.052875 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.057869 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.058075 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.058115 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.058264 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.058303 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.058365 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.077182 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk"] Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.160185 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.160280 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvdk4\" (UniqueName: \"kubernetes.io/projected/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-kube-api-access-qvdk4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.160447 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.160478 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.160500 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.160540 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.262270 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.262328 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.262368 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.262417 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.262458 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.262565 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvdk4\" (UniqueName: \"kubernetes.io/projected/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-kube-api-access-qvdk4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.266571 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.269315 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.270277 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.273146 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.273501 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.285763 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvdk4\" (UniqueName: \"kubernetes.io/projected/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-kube-api-access-qvdk4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h99fk\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:35 crc kubenswrapper[4661]: I0120 18:53:35.380748 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:53:36 crc kubenswrapper[4661]: I0120 18:53:36.073430 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk"] Jan 20 18:53:36 crc kubenswrapper[4661]: W0120 18:53:36.091489 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8b8d2fe_c25d_41e1_b32e_6e81c03e0717.slice/crio-a5dedf57f43d97ec641c9c8c359c8396a1aca567685b0821fa0c08901d28a86f WatchSource:0}: Error finding container a5dedf57f43d97ec641c9c8c359c8396a1aca567685b0821fa0c08901d28a86f: Status 404 returned error can't find the container with id a5dedf57f43d97ec641c9c8c359c8396a1aca567685b0821fa0c08901d28a86f Jan 20 18:53:36 crc kubenswrapper[4661]: I0120 18:53:36.955340 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" event={"ID":"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717","Type":"ContainerStarted","Data":"42870e80ca48ce49e90d5b21a4519dd4b0f1e196d0652ff4e0aed8d63b497e05"} Jan 20 18:53:36 crc kubenswrapper[4661]: I0120 18:53:36.955652 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" event={"ID":"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717","Type":"ContainerStarted","Data":"a5dedf57f43d97ec641c9c8c359c8396a1aca567685b0821fa0c08901d28a86f"} Jan 20 18:53:36 crc kubenswrapper[4661]: I0120 18:53:36.972792 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" podStartSLOduration=1.5414811510000002 podStartE2EDuration="1.972765399s" podCreationTimestamp="2026-01-20 18:53:35 +0000 UTC" firstStartedPulling="2026-01-20 18:53:36.095907662 +0000 UTC m=+2872.426697334" lastFinishedPulling="2026-01-20 18:53:36.52719192 +0000 UTC m=+2872.857981582" observedRunningTime="2026-01-20 18:53:36.971043644 +0000 UTC m=+2873.301833346" watchObservedRunningTime="2026-01-20 18:53:36.972765399 +0000 UTC m=+2873.303555101" Jan 20 18:53:59 crc kubenswrapper[4661]: I0120 18:53:59.324022 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:53:59 crc kubenswrapper[4661]: I0120 18:53:59.324899 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:54:29 crc kubenswrapper[4661]: I0120 18:54:29.323819 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:54:29 crc kubenswrapper[4661]: I0120 18:54:29.324347 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:54:59 crc kubenswrapper[4661]: I0120 18:54:59.324194 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 18:54:59 crc kubenswrapper[4661]: I0120 18:54:59.325163 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 18:54:59 crc kubenswrapper[4661]: I0120 18:54:59.325237 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 18:54:59 crc kubenswrapper[4661]: I0120 18:54:59.326341 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 18:54:59 crc kubenswrapper[4661]: I0120 18:54:59.326456 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" gracePeriod=600 Jan 20 18:54:59 crc kubenswrapper[4661]: E0120 18:54:59.461764 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:54:59 crc kubenswrapper[4661]: I0120 18:54:59.787165 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" exitCode=0 Jan 20 18:54:59 crc kubenswrapper[4661]: I0120 18:54:59.787224 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593"} Jan 20 18:54:59 crc kubenswrapper[4661]: I0120 18:54:59.787265 4661 scope.go:117] "RemoveContainer" containerID="09f90b8546dddfc6e996cdbb77ceb056fba14e8d7b35198aeb2ca0341a0647b6" Jan 20 18:54:59 crc kubenswrapper[4661]: I0120 18:54:59.788042 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:54:59 crc kubenswrapper[4661]: E0120 18:54:59.788377 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:55:12 crc kubenswrapper[4661]: I0120 18:55:12.141966 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:55:12 crc kubenswrapper[4661]: E0120 18:55:12.143190 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:55:25 crc kubenswrapper[4661]: I0120 18:55:25.143876 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:55:25 crc kubenswrapper[4661]: E0120 18:55:25.147275 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:55:37 crc kubenswrapper[4661]: I0120 18:55:37.142514 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:55:37 crc kubenswrapper[4661]: E0120 18:55:37.145059 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:55:51 crc kubenswrapper[4661]: I0120 18:55:51.143840 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:55:51 crc kubenswrapper[4661]: E0120 18:55:51.144659 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:56:04 crc kubenswrapper[4661]: I0120 18:56:04.150206 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:56:04 crc kubenswrapper[4661]: E0120 18:56:04.151440 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:56:19 crc kubenswrapper[4661]: I0120 18:56:19.142009 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:56:19 crc kubenswrapper[4661]: E0120 18:56:19.142827 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:56:33 crc kubenswrapper[4661]: I0120 18:56:33.142553 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:56:33 crc kubenswrapper[4661]: E0120 18:56:33.144732 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:56:47 crc kubenswrapper[4661]: I0120 18:56:47.143817 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:56:47 crc kubenswrapper[4661]: E0120 18:56:47.145056 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:57:00 crc kubenswrapper[4661]: I0120 18:57:00.142307 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:57:00 crc kubenswrapper[4661]: E0120 18:57:00.143089 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:57:13 crc kubenswrapper[4661]: I0120 18:57:13.142325 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:57:13 crc kubenswrapper[4661]: E0120 18:57:13.143198 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:57:26 crc kubenswrapper[4661]: I0120 18:57:26.841191 4661 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qpct9"] Jan 20 18:57:26 crc kubenswrapper[4661]: I0120 18:57:26.844640 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:26 crc kubenswrapper[4661]: I0120 18:57:26.856374 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qpct9"] Jan 20 18:57:26 crc kubenswrapper[4661]: I0120 18:57:26.940434 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3373961-5790-4f76-894b-d9dbeee97a62-utilities\") pod \"community-operators-qpct9\" (UID: \"e3373961-5790-4f76-894b-d9dbeee97a62\") " pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:26 crc kubenswrapper[4661]: I0120 18:57:26.940501 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8bq\" (UniqueName: \"kubernetes.io/projected/e3373961-5790-4f76-894b-d9dbeee97a62-kube-api-access-dd8bq\") pod \"community-operators-qpct9\" (UID: \"e3373961-5790-4f76-894b-d9dbeee97a62\") " pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:26 crc kubenswrapper[4661]: I0120 18:57:26.940589 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3373961-5790-4f76-894b-d9dbeee97a62-catalog-content\") pod \"community-operators-qpct9\" (UID: \"e3373961-5790-4f76-894b-d9dbeee97a62\") " pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:27 crc kubenswrapper[4661]: I0120 18:57:27.042783 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3373961-5790-4f76-894b-d9dbeee97a62-utilities\") pod \"community-operators-qpct9\" (UID: \"e3373961-5790-4f76-894b-d9dbeee97a62\") " pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:27 crc kubenswrapper[4661]: I0120 18:57:27.042844 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd8bq\" (UniqueName: \"kubernetes.io/projected/e3373961-5790-4f76-894b-d9dbeee97a62-kube-api-access-dd8bq\") pod \"community-operators-qpct9\" (UID: \"e3373961-5790-4f76-894b-d9dbeee97a62\") " pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:27 crc kubenswrapper[4661]: I0120 18:57:27.042899 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3373961-5790-4f76-894b-d9dbeee97a62-catalog-content\") pod \"community-operators-qpct9\" (UID: \"e3373961-5790-4f76-894b-d9dbeee97a62\") " pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:27 crc kubenswrapper[4661]: I0120 18:57:27.043330 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3373961-5790-4f76-894b-d9dbeee97a62-catalog-content\") pod \"community-operators-qpct9\" (UID: \"e3373961-5790-4f76-894b-d9dbeee97a62\") " pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:27 crc kubenswrapper[4661]: I0120 18:57:27.043541 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3373961-5790-4f76-894b-d9dbeee97a62-utilities\") pod 
\"community-operators-qpct9\" (UID: \"e3373961-5790-4f76-894b-d9dbeee97a62\") " pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:27 crc kubenswrapper[4661]: I0120 18:57:27.063958 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd8bq\" (UniqueName: \"kubernetes.io/projected/e3373961-5790-4f76-894b-d9dbeee97a62-kube-api-access-dd8bq\") pod \"community-operators-qpct9\" (UID: \"e3373961-5790-4f76-894b-d9dbeee97a62\") " pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:27 crc kubenswrapper[4661]: I0120 18:57:27.203761 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:27 crc kubenswrapper[4661]: I0120 18:57:27.709590 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qpct9"] Jan 20 18:57:28 crc kubenswrapper[4661]: I0120 18:57:28.143973 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:57:28 crc kubenswrapper[4661]: E0120 18:57:28.144502 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:57:28 crc kubenswrapper[4661]: I0120 18:57:28.213868 4661 generic.go:334] "Generic (PLEG): container finished" podID="e3373961-5790-4f76-894b-d9dbeee97a62" containerID="374388545a7efa88c813ba8b336680dc548994daa6797b8ac9763b1196adb995" exitCode=0 Jan 20 18:57:28 crc kubenswrapper[4661]: I0120 18:57:28.213925 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpct9" event={"ID":"e3373961-5790-4f76-894b-d9dbeee97a62","Type":"ContainerDied","Data":"374388545a7efa88c813ba8b336680dc548994daa6797b8ac9763b1196adb995"} Jan 20 18:57:28 crc kubenswrapper[4661]: I0120 18:57:28.213956 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpct9" event={"ID":"e3373961-5790-4f76-894b-d9dbeee97a62","Type":"ContainerStarted","Data":"9dae5174feb5e3c2a0ec8233e9185c9512c1b340b37a765dd47746809abfecb5"} Jan 20 18:57:28 crc kubenswrapper[4661]: I0120 18:57:28.217655 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 18:57:29 crc kubenswrapper[4661]: I0120 18:57:29.222711 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpct9" event={"ID":"e3373961-5790-4f76-894b-d9dbeee97a62","Type":"ContainerStarted","Data":"918d8db31608e7d11055b8f68b51ece48c5a51956c91d65d2de8a1c4ecb0f99f"} Jan 20 18:57:30 crc kubenswrapper[4661]: I0120 18:57:30.236581 4661 generic.go:334] "Generic (PLEG): container finished" podID="e3373961-5790-4f76-894b-d9dbeee97a62" containerID="918d8db31608e7d11055b8f68b51ece48c5a51956c91d65d2de8a1c4ecb0f99f" exitCode=0 Jan 20 18:57:30 crc kubenswrapper[4661]: I0120 18:57:30.236711 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpct9" event={"ID":"e3373961-5790-4f76-894b-d9dbeee97a62","Type":"ContainerDied","Data":"918d8db31608e7d11055b8f68b51ece48c5a51956c91d65d2de8a1c4ecb0f99f"} Jan 20 18:57:31 
crc kubenswrapper[4661]: I0120 18:57:31.250158 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpct9" event={"ID":"e3373961-5790-4f76-894b-d9dbeee97a62","Type":"ContainerStarted","Data":"ebdb22ff56587d8b5a3bb13d2f0595372a4c0a410e77779cce247d7f788d6958"} Jan 20 18:57:31 crc kubenswrapper[4661]: I0120 18:57:31.275543 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qpct9" podStartSLOduration=2.824860187 podStartE2EDuration="5.275524383s" podCreationTimestamp="2026-01-20 18:57:26 +0000 UTC" firstStartedPulling="2026-01-20 18:57:28.217357726 +0000 UTC m=+3104.548147398" lastFinishedPulling="2026-01-20 18:57:30.668021922 +0000 UTC m=+3106.998811594" observedRunningTime="2026-01-20 18:57:31.269472556 +0000 UTC m=+3107.600262228" watchObservedRunningTime="2026-01-20 18:57:31.275524383 +0000 UTC m=+3107.606314045" Jan 20 18:57:37 crc kubenswrapper[4661]: I0120 18:57:37.204512 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:37 crc kubenswrapper[4661]: I0120 18:57:37.206884 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:37 crc kubenswrapper[4661]: I0120 18:57:37.286002 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:37 crc kubenswrapper[4661]: I0120 18:57:37.370889 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:37 crc kubenswrapper[4661]: I0120 18:57:37.534209 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qpct9"] Jan 20 18:57:39 crc kubenswrapper[4661]: I0120 18:57:39.322642 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qpct9" podUID="e3373961-5790-4f76-894b-d9dbeee97a62" containerName="registry-server" containerID="cri-o://ebdb22ff56587d8b5a3bb13d2f0595372a4c0a410e77779cce247d7f788d6958" gracePeriod=2 Jan 20 18:57:39 crc kubenswrapper[4661]: I0120 18:57:39.801090 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:39 crc kubenswrapper[4661]: I0120 18:57:39.985046 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd8bq\" (UniqueName: \"kubernetes.io/projected/e3373961-5790-4f76-894b-d9dbeee97a62-kube-api-access-dd8bq\") pod \"e3373961-5790-4f76-894b-d9dbeee97a62\" (UID: \"e3373961-5790-4f76-894b-d9dbeee97a62\") " Jan 20 18:57:39 crc kubenswrapper[4661]: I0120 18:57:39.985199 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3373961-5790-4f76-894b-d9dbeee97a62-utilities\") pod \"e3373961-5790-4f76-894b-d9dbeee97a62\" (UID: \"e3373961-5790-4f76-894b-d9dbeee97a62\") " Jan 20 18:57:39 crc kubenswrapper[4661]: I0120 18:57:39.985322 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3373961-5790-4f76-894b-d9dbeee97a62-catalog-content\") pod \"e3373961-5790-4f76-894b-d9dbeee97a62\" (UID: \"e3373961-5790-4f76-894b-d9dbeee97a62\") " Jan 20 18:57:39 crc kubenswrapper[4661]: I0120 18:57:39.986166 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3373961-5790-4f76-894b-d9dbeee97a62-utilities" (OuterVolumeSpecName: "utilities") pod "e3373961-5790-4f76-894b-d9dbeee97a62" (UID: "e3373961-5790-4f76-894b-d9dbeee97a62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:57:39 crc kubenswrapper[4661]: I0120 18:57:39.997937 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3373961-5790-4f76-894b-d9dbeee97a62-kube-api-access-dd8bq" (OuterVolumeSpecName: "kube-api-access-dd8bq") pod "e3373961-5790-4f76-894b-d9dbeee97a62" (UID: "e3373961-5790-4f76-894b-d9dbeee97a62"). InnerVolumeSpecName "kube-api-access-dd8bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.049968 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3373961-5790-4f76-894b-d9dbeee97a62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3373961-5790-4f76-894b-d9dbeee97a62" (UID: "e3373961-5790-4f76-894b-d9dbeee97a62"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.087581 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3373961-5790-4f76-894b-d9dbeee97a62-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.087620 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3373961-5790-4f76-894b-d9dbeee97a62-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.087635 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd8bq\" (UniqueName: \"kubernetes.io/projected/e3373961-5790-4f76-894b-d9dbeee97a62-kube-api-access-dd8bq\") on node \"crc\" DevicePath \"\"" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.142946 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:57:40 crc kubenswrapper[4661]: E0120 18:57:40.143253 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.334878 4661 generic.go:334] "Generic (PLEG): container finished" podID="e3373961-5790-4f76-894b-d9dbeee97a62" containerID="ebdb22ff56587d8b5a3bb13d2f0595372a4c0a410e77779cce247d7f788d6958" exitCode=0 Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.334927 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpct9" event={"ID":"e3373961-5790-4f76-894b-d9dbeee97a62","Type":"ContainerDied","Data":"ebdb22ff56587d8b5a3bb13d2f0595372a4c0a410e77779cce247d7f788d6958"} Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.334957 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpct9" event={"ID":"e3373961-5790-4f76-894b-d9dbeee97a62","Type":"ContainerDied","Data":"9dae5174feb5e3c2a0ec8233e9185c9512c1b340b37a765dd47746809abfecb5"} Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.334982 4661 scope.go:117] "RemoveContainer" containerID="ebdb22ff56587d8b5a3bb13d2f0595372a4c0a410e77779cce247d7f788d6958" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.335127 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qpct9" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.371409 4661 scope.go:117] "RemoveContainer" containerID="918d8db31608e7d11055b8f68b51ece48c5a51956c91d65d2de8a1c4ecb0f99f" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.380525 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qpct9"] Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.423531 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qpct9"] Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.424132 4661 scope.go:117] "RemoveContainer" containerID="374388545a7efa88c813ba8b336680dc548994daa6797b8ac9763b1196adb995" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.462804 4661 scope.go:117] "RemoveContainer" containerID="ebdb22ff56587d8b5a3bb13d2f0595372a4c0a410e77779cce247d7f788d6958" Jan 20 18:57:40 crc kubenswrapper[4661]: E0120 18:57:40.463340 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebdb22ff56587d8b5a3bb13d2f0595372a4c0a410e77779cce247d7f788d6958\": container with ID starting with ebdb22ff56587d8b5a3bb13d2f0595372a4c0a410e77779cce247d7f788d6958 not found: ID does not exist" containerID="ebdb22ff56587d8b5a3bb13d2f0595372a4c0a410e77779cce247d7f788d6958" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.463393 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebdb22ff56587d8b5a3bb13d2f0595372a4c0a410e77779cce247d7f788d6958"} err="failed to get container status \"ebdb22ff56587d8b5a3bb13d2f0595372a4c0a410e77779cce247d7f788d6958\": rpc error: code = NotFound desc = could not find container \"ebdb22ff56587d8b5a3bb13d2f0595372a4c0a410e77779cce247d7f788d6958\": container with ID starting with ebdb22ff56587d8b5a3bb13d2f0595372a4c0a410e77779cce247d7f788d6958 not found: ID does not exist" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.463420 4661 scope.go:117] "RemoveContainer" containerID="918d8db31608e7d11055b8f68b51ece48c5a51956c91d65d2de8a1c4ecb0f99f" Jan 20 18:57:40 crc kubenswrapper[4661]: E0120 18:57:40.463934 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"918d8db31608e7d11055b8f68b51ece48c5a51956c91d65d2de8a1c4ecb0f99f\": container with ID starting with 918d8db31608e7d11055b8f68b51ece48c5a51956c91d65d2de8a1c4ecb0f99f not found: ID does not exist" containerID="918d8db31608e7d11055b8f68b51ece48c5a51956c91d65d2de8a1c4ecb0f99f" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.463964 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"918d8db31608e7d11055b8f68b51ece48c5a51956c91d65d2de8a1c4ecb0f99f"} err="failed to get container status \"918d8db31608e7d11055b8f68b51ece48c5a51956c91d65d2de8a1c4ecb0f99f\": rpc error: code = NotFound desc = could not find container \"918d8db31608e7d11055b8f68b51ece48c5a51956c91d65d2de8a1c4ecb0f99f\": container with ID starting with 918d8db31608e7d11055b8f68b51ece48c5a51956c91d65d2de8a1c4ecb0f99f not found: ID does not exist" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.463985 4661 scope.go:117] "RemoveContainer" containerID="374388545a7efa88c813ba8b336680dc548994daa6797b8ac9763b1196adb995" Jan 20 18:57:40 crc kubenswrapper[4661]: E0120 18:57:40.464275 4661 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"374388545a7efa88c813ba8b336680dc548994daa6797b8ac9763b1196adb995\": container with ID starting with 374388545a7efa88c813ba8b336680dc548994daa6797b8ac9763b1196adb995 not found: ID does not exist" containerID="374388545a7efa88c813ba8b336680dc548994daa6797b8ac9763b1196adb995" Jan 20 18:57:40 crc kubenswrapper[4661]: I0120 18:57:40.464397 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"374388545a7efa88c813ba8b336680dc548994daa6797b8ac9763b1196adb995"} err="failed to get container status \"374388545a7efa88c813ba8b336680dc548994daa6797b8ac9763b1196adb995\": rpc error: code = NotFound desc = could not find container \"374388545a7efa88c813ba8b336680dc548994daa6797b8ac9763b1196adb995\": container with ID starting with 374388545a7efa88c813ba8b336680dc548994daa6797b8ac9763b1196adb995 not found: ID does not exist" Jan 20 18:57:42 crc kubenswrapper[4661]: I0120 18:57:42.161311 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3373961-5790-4f76-894b-d9dbeee97a62" path="/var/lib/kubelet/pods/e3373961-5790-4f76-894b-d9dbeee97a62/volumes" Jan 20 18:57:45 crc kubenswrapper[4661]: I0120 18:57:45.947629 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h9g72"] Jan 20 18:57:45 crc kubenswrapper[4661]: E0120 18:57:45.948568 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3373961-5790-4f76-894b-d9dbeee97a62" containerName="extract-content" Jan 20 18:57:45 crc kubenswrapper[4661]: I0120 18:57:45.948582 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3373961-5790-4f76-894b-d9dbeee97a62" containerName="extract-content" Jan 20 18:57:45 crc kubenswrapper[4661]: E0120 18:57:45.948614 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3373961-5790-4f76-894b-d9dbeee97a62" containerName="extract-utilities" Jan 20 18:57:45 crc kubenswrapper[4661]: I0120 18:57:45.948621 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3373961-5790-4f76-894b-d9dbeee97a62" containerName="extract-utilities" Jan 20 18:57:45 crc kubenswrapper[4661]: E0120 18:57:45.948631 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3373961-5790-4f76-894b-d9dbeee97a62" containerName="registry-server" Jan 20 18:57:45 crc kubenswrapper[4661]: I0120 18:57:45.948638 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3373961-5790-4f76-894b-d9dbeee97a62" containerName="registry-server" Jan 20 18:57:45 crc kubenswrapper[4661]: I0120 18:57:45.948821 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3373961-5790-4f76-894b-d9dbeee97a62" containerName="registry-server" Jan 20 18:57:45 crc kubenswrapper[4661]: I0120 18:57:45.949971 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:45 crc kubenswrapper[4661]: I0120 18:57:45.970422 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h9g72"] Jan 20 18:57:46 crc kubenswrapper[4661]: I0120 18:57:46.116928 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d625d351-1825-4cb5-89d7-032e9114a0e7-catalog-content\") pod \"certified-operators-h9g72\" (UID: \"d625d351-1825-4cb5-89d7-032e9114a0e7\") " pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:46 crc kubenswrapper[4661]: I0120 18:57:46.116983 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d625d351-1825-4cb5-89d7-032e9114a0e7-utilities\") pod \"certified-operators-h9g72\" (UID: \"d625d351-1825-4cb5-89d7-032e9114a0e7\") " pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:46 crc kubenswrapper[4661]: I0120 18:57:46.117031 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qkw4\" (UniqueName: \"kubernetes.io/projected/d625d351-1825-4cb5-89d7-032e9114a0e7-kube-api-access-9qkw4\") pod \"certified-operators-h9g72\" (UID: \"d625d351-1825-4cb5-89d7-032e9114a0e7\") " pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:46 crc kubenswrapper[4661]: I0120 18:57:46.218413 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qkw4\" (UniqueName: \"kubernetes.io/projected/d625d351-1825-4cb5-89d7-032e9114a0e7-kube-api-access-9qkw4\") pod \"certified-operators-h9g72\" (UID: \"d625d351-1825-4cb5-89d7-032e9114a0e7\") " pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:46 crc kubenswrapper[4661]: I0120 18:57:46.218586 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d625d351-1825-4cb5-89d7-032e9114a0e7-catalog-content\") pod \"certified-operators-h9g72\" (UID: \"d625d351-1825-4cb5-89d7-032e9114a0e7\") " pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:46 crc kubenswrapper[4661]: I0120 18:57:46.218624 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d625d351-1825-4cb5-89d7-032e9114a0e7-utilities\") pod \"certified-operators-h9g72\" (UID: \"d625d351-1825-4cb5-89d7-032e9114a0e7\") " pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:46 crc kubenswrapper[4661]: I0120 18:57:46.219154 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d625d351-1825-4cb5-89d7-032e9114a0e7-catalog-content\") pod \"certified-operators-h9g72\" (UID: \"d625d351-1825-4cb5-89d7-032e9114a0e7\") " pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:46 crc kubenswrapper[4661]: I0120 18:57:46.219216 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d625d351-1825-4cb5-89d7-032e9114a0e7-utilities\") pod \"certified-operators-h9g72\" (UID: \"d625d351-1825-4cb5-89d7-032e9114a0e7\") " pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:46 crc kubenswrapper[4661]: I0120 18:57:46.252644 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9qkw4\" (UniqueName: \"kubernetes.io/projected/d625d351-1825-4cb5-89d7-032e9114a0e7-kube-api-access-9qkw4\") pod \"certified-operators-h9g72\" (UID: \"d625d351-1825-4cb5-89d7-032e9114a0e7\") " pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:46 crc kubenswrapper[4661]: I0120 18:57:46.266056 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:46 crc kubenswrapper[4661]: I0120 18:57:46.790762 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h9g72"] Jan 20 18:57:47 crc kubenswrapper[4661]: I0120 18:57:47.404373 4661 generic.go:334] "Generic (PLEG): container finished" podID="d625d351-1825-4cb5-89d7-032e9114a0e7" containerID="c75e9561769d41d3a3c26b66cb3d95f8dd72b8c2513eb510a298303c813f7aa2" exitCode=0 Jan 20 18:57:47 crc kubenswrapper[4661]: I0120 18:57:47.404448 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g72" event={"ID":"d625d351-1825-4cb5-89d7-032e9114a0e7","Type":"ContainerDied","Data":"c75e9561769d41d3a3c26b66cb3d95f8dd72b8c2513eb510a298303c813f7aa2"} Jan 20 18:57:47 crc kubenswrapper[4661]: I0120 18:57:47.404746 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g72" event={"ID":"d625d351-1825-4cb5-89d7-032e9114a0e7","Type":"ContainerStarted","Data":"c7dfd2611514aeceebd3a70997920b403b48d569014f259f616d947b81fdabb5"} Jan 20 18:57:48 crc kubenswrapper[4661]: I0120 18:57:48.424254 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g72" event={"ID":"d625d351-1825-4cb5-89d7-032e9114a0e7","Type":"ContainerStarted","Data":"7e62ff59d040dd90c505542c73f36ea933df9c6d490e1832dbf9ff419606113d"} Jan 20 18:57:49 crc kubenswrapper[4661]: I0120 18:57:49.447959 4661 generic.go:334] "Generic (PLEG): container finished" podID="d625d351-1825-4cb5-89d7-032e9114a0e7" containerID="7e62ff59d040dd90c505542c73f36ea933df9c6d490e1832dbf9ff419606113d" exitCode=0 Jan 20 18:57:49 crc kubenswrapper[4661]: I0120 18:57:49.448330 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g72" event={"ID":"d625d351-1825-4cb5-89d7-032e9114a0e7","Type":"ContainerDied","Data":"7e62ff59d040dd90c505542c73f36ea933df9c6d490e1832dbf9ff419606113d"} Jan 20 18:57:50 crc kubenswrapper[4661]: I0120 18:57:50.457650 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g72" event={"ID":"d625d351-1825-4cb5-89d7-032e9114a0e7","Type":"ContainerStarted","Data":"d654392f51457a8fe17bc2f084803004186aa03c58805c92306a99e1124a24a8"} Jan 20 18:57:50 crc kubenswrapper[4661]: I0120 18:57:50.478231 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h9g72" podStartSLOduration=2.75644256 podStartE2EDuration="5.478210779s" podCreationTimestamp="2026-01-20 18:57:45 +0000 UTC" firstStartedPulling="2026-01-20 18:57:47.406522713 +0000 UTC m=+3123.737312375" lastFinishedPulling="2026-01-20 18:57:50.128290932 +0000 UTC m=+3126.459080594" observedRunningTime="2026-01-20 18:57:50.470305224 +0000 UTC m=+3126.801094896" watchObservedRunningTime="2026-01-20 18:57:50.478210779 +0000 UTC m=+3126.809000441" Jan 20 18:57:54 crc kubenswrapper[4661]: I0120 18:57:54.150822 4661 scope.go:117] "RemoveContainer" 
containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:57:54 crc kubenswrapper[4661]: E0120 18:57:54.151497 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:57:56 crc kubenswrapper[4661]: I0120 18:57:56.266794 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:56 crc kubenswrapper[4661]: I0120 18:57:56.267141 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:56 crc kubenswrapper[4661]: I0120 18:57:56.354955 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:56 crc kubenswrapper[4661]: I0120 18:57:56.586855 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:56 crc kubenswrapper[4661]: I0120 18:57:56.632330 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h9g72"] Jan 20 18:57:58 crc kubenswrapper[4661]: I0120 18:57:58.535915 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h9g72" podUID="d625d351-1825-4cb5-89d7-032e9114a0e7" containerName="registry-server" containerID="cri-o://d654392f51457a8fe17bc2f084803004186aa03c58805c92306a99e1124a24a8" gracePeriod=2 Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.441391 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.477336 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d625d351-1825-4cb5-89d7-032e9114a0e7-utilities\") pod \"d625d351-1825-4cb5-89d7-032e9114a0e7\" (UID: \"d625d351-1825-4cb5-89d7-032e9114a0e7\") " Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.477491 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qkw4\" (UniqueName: \"kubernetes.io/projected/d625d351-1825-4cb5-89d7-032e9114a0e7-kube-api-access-9qkw4\") pod \"d625d351-1825-4cb5-89d7-032e9114a0e7\" (UID: \"d625d351-1825-4cb5-89d7-032e9114a0e7\") " Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.477513 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d625d351-1825-4cb5-89d7-032e9114a0e7-catalog-content\") pod \"d625d351-1825-4cb5-89d7-032e9114a0e7\" (UID: \"d625d351-1825-4cb5-89d7-032e9114a0e7\") " Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.490345 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d625d351-1825-4cb5-89d7-032e9114a0e7-utilities" (OuterVolumeSpecName: "utilities") pod "d625d351-1825-4cb5-89d7-032e9114a0e7" (UID: "d625d351-1825-4cb5-89d7-032e9114a0e7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.499208 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d625d351-1825-4cb5-89d7-032e9114a0e7-kube-api-access-9qkw4" (OuterVolumeSpecName: "kube-api-access-9qkw4") pod "d625d351-1825-4cb5-89d7-032e9114a0e7" (UID: "d625d351-1825-4cb5-89d7-032e9114a0e7"). InnerVolumeSpecName "kube-api-access-9qkw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.533140 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d625d351-1825-4cb5-89d7-032e9114a0e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d625d351-1825-4cb5-89d7-032e9114a0e7" (UID: "d625d351-1825-4cb5-89d7-032e9114a0e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.549649 4661 generic.go:334] "Generic (PLEG): container finished" podID="d625d351-1825-4cb5-89d7-032e9114a0e7" containerID="d654392f51457a8fe17bc2f084803004186aa03c58805c92306a99e1124a24a8" exitCode=0 Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.549795 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g72" event={"ID":"d625d351-1825-4cb5-89d7-032e9114a0e7","Type":"ContainerDied","Data":"d654392f51457a8fe17bc2f084803004186aa03c58805c92306a99e1124a24a8"} Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.549835 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9g72" event={"ID":"d625d351-1825-4cb5-89d7-032e9114a0e7","Type":"ContainerDied","Data":"c7dfd2611514aeceebd3a70997920b403b48d569014f259f616d947b81fdabb5"} Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.549856 4661 scope.go:117] "RemoveContainer" containerID="d654392f51457a8fe17bc2f084803004186aa03c58805c92306a99e1124a24a8" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.550090 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h9g72" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.582305 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d625d351-1825-4cb5-89d7-032e9114a0e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.582368 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qkw4\" (UniqueName: \"kubernetes.io/projected/d625d351-1825-4cb5-89d7-032e9114a0e7-kube-api-access-9qkw4\") on node \"crc\" DevicePath \"\"" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.582383 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d625d351-1825-4cb5-89d7-032e9114a0e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.590274 4661 scope.go:117] "RemoveContainer" containerID="7e62ff59d040dd90c505542c73f36ea933df9c6d490e1832dbf9ff419606113d" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.601265 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h9g72"] Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.608885 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h9g72"] Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.611825 4661 scope.go:117] "RemoveContainer" containerID="c75e9561769d41d3a3c26b66cb3d95f8dd72b8c2513eb510a298303c813f7aa2" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.648288 4661 scope.go:117] "RemoveContainer" containerID="d654392f51457a8fe17bc2f084803004186aa03c58805c92306a99e1124a24a8" Jan 20 18:57:59 crc kubenswrapper[4661]: E0120 18:57:59.648745 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d654392f51457a8fe17bc2f084803004186aa03c58805c92306a99e1124a24a8\": container with ID starting with d654392f51457a8fe17bc2f084803004186aa03c58805c92306a99e1124a24a8 not found: ID does not exist" containerID="d654392f51457a8fe17bc2f084803004186aa03c58805c92306a99e1124a24a8" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.648816 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d654392f51457a8fe17bc2f084803004186aa03c58805c92306a99e1124a24a8"} err="failed to get container status \"d654392f51457a8fe17bc2f084803004186aa03c58805c92306a99e1124a24a8\": rpc error: code = NotFound desc = could not find container \"d654392f51457a8fe17bc2f084803004186aa03c58805c92306a99e1124a24a8\": container with ID starting with d654392f51457a8fe17bc2f084803004186aa03c58805c92306a99e1124a24a8 not found: ID does not exist" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.648841 4661 scope.go:117] "RemoveContainer" containerID="7e62ff59d040dd90c505542c73f36ea933df9c6d490e1832dbf9ff419606113d" Jan 20 18:57:59 crc kubenswrapper[4661]: E0120 18:57:59.649329 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e62ff59d040dd90c505542c73f36ea933df9c6d490e1832dbf9ff419606113d\": container with ID starting with 7e62ff59d040dd90c505542c73f36ea933df9c6d490e1832dbf9ff419606113d not found: ID does not exist" containerID="7e62ff59d040dd90c505542c73f36ea933df9c6d490e1832dbf9ff419606113d" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.649359 4661 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e62ff59d040dd90c505542c73f36ea933df9c6d490e1832dbf9ff419606113d"} err="failed to get container status \"7e62ff59d040dd90c505542c73f36ea933df9c6d490e1832dbf9ff419606113d\": rpc error: code = NotFound desc = could not find container \"7e62ff59d040dd90c505542c73f36ea933df9c6d490e1832dbf9ff419606113d\": container with ID starting with 7e62ff59d040dd90c505542c73f36ea933df9c6d490e1832dbf9ff419606113d not found: ID does not exist" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.649381 4661 scope.go:117] "RemoveContainer" containerID="c75e9561769d41d3a3c26b66cb3d95f8dd72b8c2513eb510a298303c813f7aa2" Jan 20 18:57:59 crc kubenswrapper[4661]: E0120 18:57:59.649749 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c75e9561769d41d3a3c26b66cb3d95f8dd72b8c2513eb510a298303c813f7aa2\": container with ID starting with c75e9561769d41d3a3c26b66cb3d95f8dd72b8c2513eb510a298303c813f7aa2 not found: ID does not exist" containerID="c75e9561769d41d3a3c26b66cb3d95f8dd72b8c2513eb510a298303c813f7aa2" Jan 20 18:57:59 crc kubenswrapper[4661]: I0120 18:57:59.649776 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c75e9561769d41d3a3c26b66cb3d95f8dd72b8c2513eb510a298303c813f7aa2"} err="failed to get container status \"c75e9561769d41d3a3c26b66cb3d95f8dd72b8c2513eb510a298303c813f7aa2\": rpc error: code = NotFound desc = could not find container \"c75e9561769d41d3a3c26b66cb3d95f8dd72b8c2513eb510a298303c813f7aa2\": container with ID starting with c75e9561769d41d3a3c26b66cb3d95f8dd72b8c2513eb510a298303c813f7aa2 not found: ID does not exist" Jan 20 18:58:00 crc kubenswrapper[4661]: I0120 18:58:00.165516 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d625d351-1825-4cb5-89d7-032e9114a0e7" path="/var/lib/kubelet/pods/d625d351-1825-4cb5-89d7-032e9114a0e7/volumes" Jan 20 18:58:07 crc kubenswrapper[4661]: I0120 18:58:07.142209 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:58:07 crc kubenswrapper[4661]: E0120 18:58:07.143096 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:58:19 crc kubenswrapper[4661]: I0120 18:58:19.142165 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:58:19 crc kubenswrapper[4661]: E0120 18:58:19.142985 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:58:34 crc kubenswrapper[4661]: I0120 18:58:34.150167 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:58:34 crc 
kubenswrapper[4661]: E0120 18:58:34.152033 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:58:46 crc kubenswrapper[4661]: I0120 18:58:46.143236 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:58:46 crc kubenswrapper[4661]: E0120 18:58:46.143986 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:58:48 crc kubenswrapper[4661]: I0120 18:58:48.990958 4661 generic.go:334] "Generic (PLEG): container finished" podID="e8b8d2fe-c25d-41e1-b32e-6e81c03e0717" containerID="42870e80ca48ce49e90d5b21a4519dd4b0f1e196d0652ff4e0aed8d63b497e05" exitCode=0 Jan 20 18:58:48 crc kubenswrapper[4661]: I0120 18:58:48.991075 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" event={"ID":"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717","Type":"ContainerDied","Data":"42870e80ca48ce49e90d5b21a4519dd4b0f1e196d0652ff4e0aed8d63b497e05"} Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.385199 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.456261 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-libvirt-secret-0\") pod \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.456312 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-libvirt-combined-ca-bundle\") pod \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.456336 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-ssh-key-openstack-edpm-ipam\") pod \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.456429 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-ceph\") pod \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.456454 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvdk4\" (UniqueName: \"kubernetes.io/projected/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-kube-api-access-qvdk4\") pod \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.456475 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-inventory\") pod \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\" (UID: \"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717\") " Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.463406 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-kube-api-access-qvdk4" (OuterVolumeSpecName: "kube-api-access-qvdk4") pod "e8b8d2fe-c25d-41e1-b32e-6e81c03e0717" (UID: "e8b8d2fe-c25d-41e1-b32e-6e81c03e0717"). InnerVolumeSpecName "kube-api-access-qvdk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.463980 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-ceph" (OuterVolumeSpecName: "ceph") pod "e8b8d2fe-c25d-41e1-b32e-6e81c03e0717" (UID: "e8b8d2fe-c25d-41e1-b32e-6e81c03e0717"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.472819 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e8b8d2fe-c25d-41e1-b32e-6e81c03e0717" (UID: "e8b8d2fe-c25d-41e1-b32e-6e81c03e0717"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.484046 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e8b8d2fe-c25d-41e1-b32e-6e81c03e0717" (UID: "e8b8d2fe-c25d-41e1-b32e-6e81c03e0717"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.487878 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "e8b8d2fe-c25d-41e1-b32e-6e81c03e0717" (UID: "e8b8d2fe-c25d-41e1-b32e-6e81c03e0717"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.506881 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-inventory" (OuterVolumeSpecName: "inventory") pod "e8b8d2fe-c25d-41e1-b32e-6e81c03e0717" (UID: "e8b8d2fe-c25d-41e1-b32e-6e81c03e0717"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.560166 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.560490 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvdk4\" (UniqueName: \"kubernetes.io/projected/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-kube-api-access-qvdk4\") on node \"crc\" DevicePath \"\"" Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.560558 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.560637 4661 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.560728 4661 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 18:58:50 crc kubenswrapper[4661]: I0120 18:58:50.560815 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8b8d2fe-c25d-41e1-b32e-6e81c03e0717-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.010981 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" event={"ID":"e8b8d2fe-c25d-41e1-b32e-6e81c03e0717","Type":"ContainerDied","Data":"a5dedf57f43d97ec641c9c8c359c8396a1aca567685b0821fa0c08901d28a86f"} Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.011036 4661 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a5dedf57f43d97ec641c9c8c359c8396a1aca567685b0821fa0c08901d28a86f" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.011124 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h99fk" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.150642 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb"] Jan 20 18:58:51 crc kubenswrapper[4661]: E0120 18:58:51.151001 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d625d351-1825-4cb5-89d7-032e9114a0e7" containerName="extract-content" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.151023 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="d625d351-1825-4cb5-89d7-032e9114a0e7" containerName="extract-content" Jan 20 18:58:51 crc kubenswrapper[4661]: E0120 18:58:51.151047 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d625d351-1825-4cb5-89d7-032e9114a0e7" containerName="registry-server" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.151056 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="d625d351-1825-4cb5-89d7-032e9114a0e7" containerName="registry-server" Jan 20 18:58:51 crc kubenswrapper[4661]: E0120 18:58:51.151075 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d625d351-1825-4cb5-89d7-032e9114a0e7" containerName="extract-utilities" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.151083 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="d625d351-1825-4cb5-89d7-032e9114a0e7" containerName="extract-utilities" Jan 20 18:58:51 crc kubenswrapper[4661]: E0120 18:58:51.151100 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b8d2fe-c25d-41e1-b32e-6e81c03e0717" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.151110 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b8d2fe-c25d-41e1-b32e-6e81c03e0717" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.151302 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="d625d351-1825-4cb5-89d7-032e9114a0e7" containerName="registry-server" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.151322 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b8d2fe-c25d-41e1-b32e-6e81c03e0717" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.151977 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.153642 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mmbv8" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.153832 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.153974 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.155176 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.155500 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.155620 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.156126 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.156256 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.161390 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.176688 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb"] Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.273182 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/40502583-1982-469d-a228-04488a4eb068-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.273234 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.273371 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.273405 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.273816 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.274002 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.274178 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.274208 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/40502583-1982-469d-a228-04488a4eb068-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.274312 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttn2t\" (UniqueName: \"kubernetes.io/projected/40502583-1982-469d-a228-04488a4eb068-kube-api-access-ttn2t\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.274348 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.274531 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: 
\"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.375820 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.375882 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/40502583-1982-469d-a228-04488a4eb068-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.375906 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.375969 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.375993 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.376032 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.376063 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.376100 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.376120 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/40502583-1982-469d-a228-04488a4eb068-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.376150 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttn2t\" (UniqueName: \"kubernetes.io/projected/40502583-1982-469d-a228-04488a4eb068-kube-api-access-ttn2t\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.376166 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.376780 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/40502583-1982-469d-a228-04488a4eb068-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.377489 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/40502583-1982-469d-a228-04488a4eb068-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.380039 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.380450 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.380876 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.381551 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.381619 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.384027 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.384300 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.395592 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.404263 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttn2t\" (UniqueName: \"kubernetes.io/projected/40502583-1982-469d-a228-04488a4eb068-kube-api-access-ttn2t\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:51 crc kubenswrapper[4661]: I0120 18:58:51.470036 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 18:58:52 crc kubenswrapper[4661]: I0120 18:58:52.025301 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb"] Jan 20 18:58:53 crc kubenswrapper[4661]: I0120 18:58:53.025587 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" event={"ID":"40502583-1982-469d-a228-04488a4eb068","Type":"ContainerStarted","Data":"85825afdfd9e1e147a8243598206aed288a51b71c95b41e7b42b693762885448"} Jan 20 18:58:53 crc kubenswrapper[4661]: I0120 18:58:53.025638 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" event={"ID":"40502583-1982-469d-a228-04488a4eb068","Type":"ContainerStarted","Data":"779175ea1fcf72b8fb5d182ba5e7d0c524da7d8ea701711137cf790c5e2af981"} Jan 20 18:58:53 crc kubenswrapper[4661]: I0120 18:58:53.046470 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" podStartSLOduration=1.497871872 podStartE2EDuration="2.046452608s" podCreationTimestamp="2026-01-20 18:58:51 +0000 UTC" firstStartedPulling="2026-01-20 18:58:52.032817134 +0000 UTC m=+3188.363606796" lastFinishedPulling="2026-01-20 18:58:52.58139785 +0000 UTC m=+3188.912187532" observedRunningTime="2026-01-20 18:58:53.041862508 +0000 UTC m=+3189.372652160" watchObservedRunningTime="2026-01-20 18:58:53.046452608 +0000 UTC m=+3189.377242270" Jan 20 18:59:00 crc kubenswrapper[4661]: I0120 18:59:00.143025 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:59:00 crc kubenswrapper[4661]: E0120 18:59:00.144169 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:59:14 crc kubenswrapper[4661]: I0120 18:59:14.146642 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:59:14 crc kubenswrapper[4661]: E0120 18:59:14.150077 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:59:29 crc kubenswrapper[4661]: I0120 18:59:29.143733 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:59:29 crc kubenswrapper[4661]: E0120 18:59:29.145044 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:59:44 crc kubenswrapper[4661]: I0120 18:59:44.156308 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:59:44 crc kubenswrapper[4661]: E0120 18:59:44.157189 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 18:59:57 crc kubenswrapper[4661]: I0120 18:59:57.142645 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 18:59:57 crc kubenswrapper[4661]: E0120 18:59:57.143527 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.180111 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8"] Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.188420 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.195231 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8"] Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.195745 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.196066 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.224430 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c300e019-cfcb-41bf-9824-11eb5a49063e-secret-volume\") pod \"collect-profiles-29482260-249l8\" (UID: \"c300e019-cfcb-41bf-9824-11eb5a49063e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.224700 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c300e019-cfcb-41bf-9824-11eb5a49063e-config-volume\") pod \"collect-profiles-29482260-249l8\" (UID: \"c300e019-cfcb-41bf-9824-11eb5a49063e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.224829 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr5r6\" (UniqueName: 
\"kubernetes.io/projected/c300e019-cfcb-41bf-9824-11eb5a49063e-kube-api-access-nr5r6\") pod \"collect-profiles-29482260-249l8\" (UID: \"c300e019-cfcb-41bf-9824-11eb5a49063e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.326578 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c300e019-cfcb-41bf-9824-11eb5a49063e-secret-volume\") pod \"collect-profiles-29482260-249l8\" (UID: \"c300e019-cfcb-41bf-9824-11eb5a49063e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.327063 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c300e019-cfcb-41bf-9824-11eb5a49063e-config-volume\") pod \"collect-profiles-29482260-249l8\" (UID: \"c300e019-cfcb-41bf-9824-11eb5a49063e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.327150 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr5r6\" (UniqueName: \"kubernetes.io/projected/c300e019-cfcb-41bf-9824-11eb5a49063e-kube-api-access-nr5r6\") pod \"collect-profiles-29482260-249l8\" (UID: \"c300e019-cfcb-41bf-9824-11eb5a49063e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.327901 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c300e019-cfcb-41bf-9824-11eb5a49063e-config-volume\") pod \"collect-profiles-29482260-249l8\" (UID: \"c300e019-cfcb-41bf-9824-11eb5a49063e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.335443 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c300e019-cfcb-41bf-9824-11eb5a49063e-secret-volume\") pod \"collect-profiles-29482260-249l8\" (UID: \"c300e019-cfcb-41bf-9824-11eb5a49063e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.343512 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr5r6\" (UniqueName: \"kubernetes.io/projected/c300e019-cfcb-41bf-9824-11eb5a49063e-kube-api-access-nr5r6\") pod \"collect-profiles-29482260-249l8\" (UID: \"c300e019-cfcb-41bf-9824-11eb5a49063e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.515895 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" Jan 20 19:00:00 crc kubenswrapper[4661]: I0120 19:00:00.995547 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8"] Jan 20 19:00:01 crc kubenswrapper[4661]: I0120 19:00:01.669258 4661 generic.go:334] "Generic (PLEG): container finished" podID="c300e019-cfcb-41bf-9824-11eb5a49063e" containerID="c9e624dbf678ee848016bde42cb8d6d2c3fb9390e305ba5bad5d4dbf7742b21e" exitCode=0 Jan 20 19:00:01 crc kubenswrapper[4661]: I0120 19:00:01.669349 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" event={"ID":"c300e019-cfcb-41bf-9824-11eb5a49063e","Type":"ContainerDied","Data":"c9e624dbf678ee848016bde42cb8d6d2c3fb9390e305ba5bad5d4dbf7742b21e"} Jan 20 19:00:01 crc kubenswrapper[4661]: I0120 19:00:01.669424 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" event={"ID":"c300e019-cfcb-41bf-9824-11eb5a49063e","Type":"ContainerStarted","Data":"4ddcc7538c2f6a1a58cd2dce39e7084877867a0b2b282713bd1923e7bc152bee"} Jan 20 19:00:03 crc kubenswrapper[4661]: I0120 19:00:03.016231 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" Jan 20 19:00:03 crc kubenswrapper[4661]: I0120 19:00:03.084396 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c300e019-cfcb-41bf-9824-11eb5a49063e-config-volume\") pod \"c300e019-cfcb-41bf-9824-11eb5a49063e\" (UID: \"c300e019-cfcb-41bf-9824-11eb5a49063e\") " Jan 20 19:00:03 crc kubenswrapper[4661]: I0120 19:00:03.084895 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr5r6\" (UniqueName: \"kubernetes.io/projected/c300e019-cfcb-41bf-9824-11eb5a49063e-kube-api-access-nr5r6\") pod \"c300e019-cfcb-41bf-9824-11eb5a49063e\" (UID: \"c300e019-cfcb-41bf-9824-11eb5a49063e\") " Jan 20 19:00:03 crc kubenswrapper[4661]: I0120 19:00:03.085210 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c300e019-cfcb-41bf-9824-11eb5a49063e-secret-volume\") pod \"c300e019-cfcb-41bf-9824-11eb5a49063e\" (UID: \"c300e019-cfcb-41bf-9824-11eb5a49063e\") " Jan 20 19:00:03 crc kubenswrapper[4661]: I0120 19:00:03.085460 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c300e019-cfcb-41bf-9824-11eb5a49063e-config-volume" (OuterVolumeSpecName: "config-volume") pod "c300e019-cfcb-41bf-9824-11eb5a49063e" (UID: "c300e019-cfcb-41bf-9824-11eb5a49063e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:00:03 crc kubenswrapper[4661]: I0120 19:00:03.086135 4661 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c300e019-cfcb-41bf-9824-11eb5a49063e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 19:00:03 crc kubenswrapper[4661]: I0120 19:00:03.098344 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c300e019-cfcb-41bf-9824-11eb5a49063e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c300e019-cfcb-41bf-9824-11eb5a49063e" (UID: "c300e019-cfcb-41bf-9824-11eb5a49063e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:00:03 crc kubenswrapper[4661]: I0120 19:00:03.098432 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c300e019-cfcb-41bf-9824-11eb5a49063e-kube-api-access-nr5r6" (OuterVolumeSpecName: "kube-api-access-nr5r6") pod "c300e019-cfcb-41bf-9824-11eb5a49063e" (UID: "c300e019-cfcb-41bf-9824-11eb5a49063e"). InnerVolumeSpecName "kube-api-access-nr5r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:00:03 crc kubenswrapper[4661]: I0120 19:00:03.187679 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr5r6\" (UniqueName: \"kubernetes.io/projected/c300e019-cfcb-41bf-9824-11eb5a49063e-kube-api-access-nr5r6\") on node \"crc\" DevicePath \"\"" Jan 20 19:00:03 crc kubenswrapper[4661]: I0120 19:00:03.187711 4661 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c300e019-cfcb-41bf-9824-11eb5a49063e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 19:00:03 crc kubenswrapper[4661]: I0120 19:00:03.702494 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" event={"ID":"c300e019-cfcb-41bf-9824-11eb5a49063e","Type":"ContainerDied","Data":"4ddcc7538c2f6a1a58cd2dce39e7084877867a0b2b282713bd1923e7bc152bee"} Jan 20 19:00:03 crc kubenswrapper[4661]: I0120 19:00:03.702963 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ddcc7538c2f6a1a58cd2dce39e7084877867a0b2b282713bd1923e7bc152bee" Jan 20 19:00:03 crc kubenswrapper[4661]: I0120 19:00:03.703062 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482260-249l8" Jan 20 19:00:04 crc kubenswrapper[4661]: I0120 19:00:04.110362 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww"] Jan 20 19:00:04 crc kubenswrapper[4661]: I0120 19:00:04.136529 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482215-vhxww"] Jan 20 19:00:04 crc kubenswrapper[4661]: I0120 19:00:04.153979 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63f3a411-1b30-43f8-a0d4-dcf489b965c2" path="/var/lib/kubelet/pods/63f3a411-1b30-43f8-a0d4-dcf489b965c2/volumes" Jan 20 19:00:08 crc kubenswrapper[4661]: I0120 19:00:08.142640 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 19:00:08 crc kubenswrapper[4661]: I0120 19:00:08.755230 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"03255bad160b69aedb631395e65d9a4b12434de8081b19d4e9a6358a608a74a9"} Jan 20 19:00:18 crc kubenswrapper[4661]: I0120 19:00:18.418241 4661 scope.go:117] "RemoveContainer" containerID="c7b52367cbb7cd7548a6b5d3b0d16fb75925f643c6e2896da0bdc597e4ec0832" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.168737 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29482261-r7s7h"] Jan 20 19:01:00 crc kubenswrapper[4661]: E0120 19:01:00.186510 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c300e019-cfcb-41bf-9824-11eb5a49063e" containerName="collect-profiles" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.186532 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="c300e019-cfcb-41bf-9824-11eb5a49063e" containerName="collect-profiles" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.186750 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="c300e019-cfcb-41bf-9824-11eb5a49063e" containerName="collect-profiles" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.187363 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29482261-r7s7h"] Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.187448 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.288080 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-combined-ca-bundle\") pod \"keystone-cron-29482261-r7s7h\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.288146 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-config-data\") pod \"keystone-cron-29482261-r7s7h\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.288203 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzx65\" (UniqueName: \"kubernetes.io/projected/089b2d3d-e382-4407-828e-cbeb9199951f-kube-api-access-qzx65\") pod \"keystone-cron-29482261-r7s7h\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.288233 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-fernet-keys\") pod \"keystone-cron-29482261-r7s7h\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.389341 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzx65\" (UniqueName: \"kubernetes.io/projected/089b2d3d-e382-4407-828e-cbeb9199951f-kube-api-access-qzx65\") pod \"keystone-cron-29482261-r7s7h\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.389695 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-fernet-keys\") pod \"keystone-cron-29482261-r7s7h\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.389882 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-combined-ca-bundle\") pod \"keystone-cron-29482261-r7s7h\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.389972 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-config-data\") pod \"keystone-cron-29482261-r7s7h\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.398500 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-config-data\") pod \"keystone-cron-29482261-r7s7h\" (UID: 
\"089b2d3d-e382-4407-828e-cbeb9199951f\") " pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.398974 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-combined-ca-bundle\") pod \"keystone-cron-29482261-r7s7h\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.399058 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-fernet-keys\") pod \"keystone-cron-29482261-r7s7h\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.418514 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzx65\" (UniqueName: \"kubernetes.io/projected/089b2d3d-e382-4407-828e-cbeb9199951f-kube-api-access-qzx65\") pod \"keystone-cron-29482261-r7s7h\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:00 crc kubenswrapper[4661]: I0120 19:01:00.520064 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:01 crc kubenswrapper[4661]: I0120 19:01:01.093985 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29482261-r7s7h"] Jan 20 19:01:01 crc kubenswrapper[4661]: I0120 19:01:01.350710 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29482261-r7s7h" event={"ID":"089b2d3d-e382-4407-828e-cbeb9199951f","Type":"ContainerStarted","Data":"928471a0c4179b1b2b4948ca9a97a529d7e85fdaf1793a263f3aab5e4ef0aa52"} Jan 20 19:01:01 crc kubenswrapper[4661]: I0120 19:01:01.350961 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29482261-r7s7h" event={"ID":"089b2d3d-e382-4407-828e-cbeb9199951f","Type":"ContainerStarted","Data":"f5432ffc5d517a97b5d71a7c31f19d1e7a53feaa19f9cffec24c478ef29c7163"} Jan 20 19:01:01 crc kubenswrapper[4661]: I0120 19:01:01.376555 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29482261-r7s7h" podStartSLOduration=1.376536356 podStartE2EDuration="1.376536356s" podCreationTimestamp="2026-01-20 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 19:01:01.370751085 +0000 UTC m=+3317.701540747" watchObservedRunningTime="2026-01-20 19:01:01.376536356 +0000 UTC m=+3317.707326028" Jan 20 19:01:03 crc kubenswrapper[4661]: I0120 19:01:03.377611 4661 generic.go:334] "Generic (PLEG): container finished" podID="089b2d3d-e382-4407-828e-cbeb9199951f" containerID="928471a0c4179b1b2b4948ca9a97a529d7e85fdaf1793a263f3aab5e4ef0aa52" exitCode=0 Jan 20 19:01:03 crc kubenswrapper[4661]: I0120 19:01:03.377992 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29482261-r7s7h" event={"ID":"089b2d3d-e382-4407-828e-cbeb9199951f","Type":"ContainerDied","Data":"928471a0c4179b1b2b4948ca9a97a529d7e85fdaf1793a263f3aab5e4ef0aa52"} Jan 20 19:01:04 crc kubenswrapper[4661]: I0120 19:01:04.726084 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:04 crc kubenswrapper[4661]: I0120 19:01:04.873941 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-fernet-keys\") pod \"089b2d3d-e382-4407-828e-cbeb9199951f\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " Jan 20 19:01:04 crc kubenswrapper[4661]: I0120 19:01:04.874049 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-combined-ca-bundle\") pod \"089b2d3d-e382-4407-828e-cbeb9199951f\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " Jan 20 19:01:04 crc kubenswrapper[4661]: I0120 19:01:04.874084 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-config-data\") pod \"089b2d3d-e382-4407-828e-cbeb9199951f\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " Jan 20 19:01:04 crc kubenswrapper[4661]: I0120 19:01:04.874274 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzx65\" (UniqueName: \"kubernetes.io/projected/089b2d3d-e382-4407-828e-cbeb9199951f-kube-api-access-qzx65\") pod \"089b2d3d-e382-4407-828e-cbeb9199951f\" (UID: \"089b2d3d-e382-4407-828e-cbeb9199951f\") " Jan 20 19:01:04 crc kubenswrapper[4661]: I0120 19:01:04.903143 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/089b2d3d-e382-4407-828e-cbeb9199951f-kube-api-access-qzx65" (OuterVolumeSpecName: "kube-api-access-qzx65") pod "089b2d3d-e382-4407-828e-cbeb9199951f" (UID: "089b2d3d-e382-4407-828e-cbeb9199951f"). InnerVolumeSpecName "kube-api-access-qzx65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:01:04 crc kubenswrapper[4661]: I0120 19:01:04.916526 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "089b2d3d-e382-4407-828e-cbeb9199951f" (UID: "089b2d3d-e382-4407-828e-cbeb9199951f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:01:04 crc kubenswrapper[4661]: I0120 19:01:04.926874 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "089b2d3d-e382-4407-828e-cbeb9199951f" (UID: "089b2d3d-e382-4407-828e-cbeb9199951f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:01:04 crc kubenswrapper[4661]: I0120 19:01:04.945302 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-config-data" (OuterVolumeSpecName: "config-data") pod "089b2d3d-e382-4407-828e-cbeb9199951f" (UID: "089b2d3d-e382-4407-828e-cbeb9199951f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:01:04 crc kubenswrapper[4661]: I0120 19:01:04.976591 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzx65\" (UniqueName: \"kubernetes.io/projected/089b2d3d-e382-4407-828e-cbeb9199951f-kube-api-access-qzx65\") on node \"crc\" DevicePath \"\"" Jan 20 19:01:04 crc kubenswrapper[4661]: I0120 19:01:04.976621 4661 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 20 19:01:04 crc kubenswrapper[4661]: I0120 19:01:04.976632 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 19:01:04 crc kubenswrapper[4661]: I0120 19:01:04.976641 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/089b2d3d-e382-4407-828e-cbeb9199951f-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:01:05 crc kubenswrapper[4661]: I0120 19:01:05.397185 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29482261-r7s7h" event={"ID":"089b2d3d-e382-4407-828e-cbeb9199951f","Type":"ContainerDied","Data":"f5432ffc5d517a97b5d71a7c31f19d1e7a53feaa19f9cffec24c478ef29c7163"} Jan 20 19:01:05 crc kubenswrapper[4661]: I0120 19:01:05.397224 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5432ffc5d517a97b5d71a7c31f19d1e7a53feaa19f9cffec24c478ef29c7163" Jan 20 19:01:05 crc kubenswrapper[4661]: I0120 19:01:05.397240 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29482261-r7s7h" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.436825 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ckvsw"] Jan 20 19:01:11 crc kubenswrapper[4661]: E0120 19:01:11.441755 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="089b2d3d-e382-4407-828e-cbeb9199951f" containerName="keystone-cron" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.442115 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="089b2d3d-e382-4407-828e-cbeb9199951f" containerName="keystone-cron" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.442393 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="089b2d3d-e382-4407-828e-cbeb9199951f" containerName="keystone-cron" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.443757 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.472624 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckvsw"] Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.624945 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-utilities\") pod \"redhat-marketplace-ckvsw\" (UID: \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\") " pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.625027 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-catalog-content\") pod \"redhat-marketplace-ckvsw\" (UID: \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\") " pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.625077 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nznm\" (UniqueName: \"kubernetes.io/projected/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-kube-api-access-7nznm\") pod \"redhat-marketplace-ckvsw\" (UID: \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\") " pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.726578 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-utilities\") pod \"redhat-marketplace-ckvsw\" (UID: \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\") " pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.726979 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-catalog-content\") pod \"redhat-marketplace-ckvsw\" (UID: \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\") " pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.727111 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nznm\" (UniqueName: \"kubernetes.io/projected/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-kube-api-access-7nznm\") pod \"redhat-marketplace-ckvsw\" (UID: \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\") " pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.727105 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-utilities\") pod \"redhat-marketplace-ckvsw\" (UID: \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\") " pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.727284 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-catalog-content\") pod \"redhat-marketplace-ckvsw\" (UID: \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\") " pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.759771 4661 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7nznm\" (UniqueName: \"kubernetes.io/projected/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-kube-api-access-7nznm\") pod \"redhat-marketplace-ckvsw\" (UID: \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\") " pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:11 crc kubenswrapper[4661]: I0120 19:01:11.763306 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:12 crc kubenswrapper[4661]: I0120 19:01:12.284831 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckvsw"] Jan 20 19:01:12 crc kubenswrapper[4661]: W0120 19:01:12.300374 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33e61e54_f5f2_4d5e_bf60_2cbbf46a028d.slice/crio-f114be39f4fdad72431402f91c47d3281be9af8e518682260da4ee51ff0ce166 WatchSource:0}: Error finding container f114be39f4fdad72431402f91c47d3281be9af8e518682260da4ee51ff0ce166: Status 404 returned error can't find the container with id f114be39f4fdad72431402f91c47d3281be9af8e518682260da4ee51ff0ce166 Jan 20 19:01:12 crc kubenswrapper[4661]: I0120 19:01:12.476484 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckvsw" event={"ID":"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d","Type":"ContainerStarted","Data":"f114be39f4fdad72431402f91c47d3281be9af8e518682260da4ee51ff0ce166"} Jan 20 19:01:13 crc kubenswrapper[4661]: I0120 19:01:13.486503 4661 generic.go:334] "Generic (PLEG): container finished" podID="33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" containerID="eca0b844508ae4933c6cc2c838ce7e219a1d385c18f3ce691898bccfcf1e6f7e" exitCode=0 Jan 20 19:01:13 crc kubenswrapper[4661]: I0120 19:01:13.486601 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckvsw" event={"ID":"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d","Type":"ContainerDied","Data":"eca0b844508ae4933c6cc2c838ce7e219a1d385c18f3ce691898bccfcf1e6f7e"} Jan 20 19:01:15 crc kubenswrapper[4661]: I0120 19:01:15.503503 4661 generic.go:334] "Generic (PLEG): container finished" podID="33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" containerID="6759fb6dcb017d6e55e21f8ffcd6a243f2c41be10123c08b0fb2254144ec75c2" exitCode=0 Jan 20 19:01:15 crc kubenswrapper[4661]: I0120 19:01:15.504113 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckvsw" event={"ID":"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d","Type":"ContainerDied","Data":"6759fb6dcb017d6e55e21f8ffcd6a243f2c41be10123c08b0fb2254144ec75c2"} Jan 20 19:01:16 crc kubenswrapper[4661]: I0120 19:01:16.515420 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckvsw" event={"ID":"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d","Type":"ContainerStarted","Data":"5b0a3e0d15d7793023dc500f5aa0c83da7fe1b10bf7c9c6af349d7eff8fcd421"} Jan 20 19:01:16 crc kubenswrapper[4661]: I0120 19:01:16.554061 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ckvsw" podStartSLOduration=3.122707349 podStartE2EDuration="5.554034539s" podCreationTimestamp="2026-01-20 19:01:11 +0000 UTC" firstStartedPulling="2026-01-20 19:01:13.4888317 +0000 UTC m=+3329.819621382" lastFinishedPulling="2026-01-20 19:01:15.92015889 +0000 UTC m=+3332.250948572" observedRunningTime="2026-01-20 19:01:16.540791592 +0000 UTC m=+3332.871581254" 
watchObservedRunningTime="2026-01-20 19:01:16.554034539 +0000 UTC m=+3332.884824221" Jan 20 19:01:21 crc kubenswrapper[4661]: I0120 19:01:21.763974 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:21 crc kubenswrapper[4661]: I0120 19:01:21.764553 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:21 crc kubenswrapper[4661]: I0120 19:01:21.874285 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:22 crc kubenswrapper[4661]: I0120 19:01:22.617542 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:22 crc kubenswrapper[4661]: I0120 19:01:22.694839 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckvsw"] Jan 20 19:01:24 crc kubenswrapper[4661]: I0120 19:01:24.581161 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ckvsw" podUID="33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" containerName="registry-server" containerID="cri-o://5b0a3e0d15d7793023dc500f5aa0c83da7fe1b10bf7c9c6af349d7eff8fcd421" gracePeriod=2 Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.034088 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.180330 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-catalog-content\") pod \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\" (UID: \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\") " Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.180469 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nznm\" (UniqueName: \"kubernetes.io/projected/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-kube-api-access-7nznm\") pod \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\" (UID: \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\") " Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.180582 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-utilities\") pod \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\" (UID: \"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d\") " Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.181545 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-utilities" (OuterVolumeSpecName: "utilities") pod "33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" (UID: "33e61e54-f5f2-4d5e-bf60-2cbbf46a028d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.186822 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-kube-api-access-7nznm" (OuterVolumeSpecName: "kube-api-access-7nznm") pod "33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" (UID: "33e61e54-f5f2-4d5e-bf60-2cbbf46a028d"). InnerVolumeSpecName "kube-api-access-7nznm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.203635 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" (UID: "33e61e54-f5f2-4d5e-bf60-2cbbf46a028d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.282177 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.282210 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nznm\" (UniqueName: \"kubernetes.io/projected/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-kube-api-access-7nznm\") on node \"crc\" DevicePath \"\"" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.282225 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.595458 4661 generic.go:334] "Generic (PLEG): container finished" podID="33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" containerID="5b0a3e0d15d7793023dc500f5aa0c83da7fe1b10bf7c9c6af349d7eff8fcd421" exitCode=0 Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.595548 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckvsw" event={"ID":"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d","Type":"ContainerDied","Data":"5b0a3e0d15d7793023dc500f5aa0c83da7fe1b10bf7c9c6af349d7eff8fcd421"} Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.595587 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckvsw" event={"ID":"33e61e54-f5f2-4d5e-bf60-2cbbf46a028d","Type":"ContainerDied","Data":"f114be39f4fdad72431402f91c47d3281be9af8e518682260da4ee51ff0ce166"} Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.595699 4661 scope.go:117] "RemoveContainer" containerID="5b0a3e0d15d7793023dc500f5aa0c83da7fe1b10bf7c9c6af349d7eff8fcd421" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.597943 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckvsw" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.628009 4661 scope.go:117] "RemoveContainer" containerID="6759fb6dcb017d6e55e21f8ffcd6a243f2c41be10123c08b0fb2254144ec75c2" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.676339 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckvsw"] Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.679488 4661 scope.go:117] "RemoveContainer" containerID="eca0b844508ae4933c6cc2c838ce7e219a1d385c18f3ce691898bccfcf1e6f7e" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.692223 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckvsw"] Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.726731 4661 scope.go:117] "RemoveContainer" containerID="5b0a3e0d15d7793023dc500f5aa0c83da7fe1b10bf7c9c6af349d7eff8fcd421" Jan 20 19:01:25 crc kubenswrapper[4661]: E0120 19:01:25.727035 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b0a3e0d15d7793023dc500f5aa0c83da7fe1b10bf7c9c6af349d7eff8fcd421\": container with ID starting with 5b0a3e0d15d7793023dc500f5aa0c83da7fe1b10bf7c9c6af349d7eff8fcd421 not found: ID does not exist" containerID="5b0a3e0d15d7793023dc500f5aa0c83da7fe1b10bf7c9c6af349d7eff8fcd421" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.727062 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b0a3e0d15d7793023dc500f5aa0c83da7fe1b10bf7c9c6af349d7eff8fcd421"} err="failed to get container status \"5b0a3e0d15d7793023dc500f5aa0c83da7fe1b10bf7c9c6af349d7eff8fcd421\": rpc error: code = NotFound desc = could not find container \"5b0a3e0d15d7793023dc500f5aa0c83da7fe1b10bf7c9c6af349d7eff8fcd421\": container with ID starting with 5b0a3e0d15d7793023dc500f5aa0c83da7fe1b10bf7c9c6af349d7eff8fcd421 not found: ID does not exist" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.727081 4661 scope.go:117] "RemoveContainer" containerID="6759fb6dcb017d6e55e21f8ffcd6a243f2c41be10123c08b0fb2254144ec75c2" Jan 20 19:01:25 crc kubenswrapper[4661]: E0120 19:01:25.727435 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6759fb6dcb017d6e55e21f8ffcd6a243f2c41be10123c08b0fb2254144ec75c2\": container with ID starting with 6759fb6dcb017d6e55e21f8ffcd6a243f2c41be10123c08b0fb2254144ec75c2 not found: ID does not exist" containerID="6759fb6dcb017d6e55e21f8ffcd6a243f2c41be10123c08b0fb2254144ec75c2" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.727458 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6759fb6dcb017d6e55e21f8ffcd6a243f2c41be10123c08b0fb2254144ec75c2"} err="failed to get container status \"6759fb6dcb017d6e55e21f8ffcd6a243f2c41be10123c08b0fb2254144ec75c2\": rpc error: code = NotFound desc = could not find container \"6759fb6dcb017d6e55e21f8ffcd6a243f2c41be10123c08b0fb2254144ec75c2\": container with ID starting with 6759fb6dcb017d6e55e21f8ffcd6a243f2c41be10123c08b0fb2254144ec75c2 not found: ID does not exist" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.727470 4661 scope.go:117] "RemoveContainer" containerID="eca0b844508ae4933c6cc2c838ce7e219a1d385c18f3ce691898bccfcf1e6f7e" Jan 20 19:01:25 crc kubenswrapper[4661]: E0120 19:01:25.727790 4661 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"eca0b844508ae4933c6cc2c838ce7e219a1d385c18f3ce691898bccfcf1e6f7e\": container with ID starting with eca0b844508ae4933c6cc2c838ce7e219a1d385c18f3ce691898bccfcf1e6f7e not found: ID does not exist" containerID="eca0b844508ae4933c6cc2c838ce7e219a1d385c18f3ce691898bccfcf1e6f7e" Jan 20 19:01:25 crc kubenswrapper[4661]: I0120 19:01:25.727811 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca0b844508ae4933c6cc2c838ce7e219a1d385c18f3ce691898bccfcf1e6f7e"} err="failed to get container status \"eca0b844508ae4933c6cc2c838ce7e219a1d385c18f3ce691898bccfcf1e6f7e\": rpc error: code = NotFound desc = could not find container \"eca0b844508ae4933c6cc2c838ce7e219a1d385c18f3ce691898bccfcf1e6f7e\": container with ID starting with eca0b844508ae4933c6cc2c838ce7e219a1d385c18f3ce691898bccfcf1e6f7e not found: ID does not exist" Jan 20 19:01:26 crc kubenswrapper[4661]: I0120 19:01:26.156862 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" path="/var/lib/kubelet/pods/33e61e54-f5f2-4d5e-bf60-2cbbf46a028d/volumes" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.456694 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c848c"] Jan 20 19:01:53 crc kubenswrapper[4661]: E0120 19:01:53.471194 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" containerName="extract-content" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.471218 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" containerName="extract-content" Jan 20 19:01:53 crc kubenswrapper[4661]: E0120 19:01:53.471240 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" containerName="extract-utilities" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.471249 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" containerName="extract-utilities" Jan 20 19:01:53 crc kubenswrapper[4661]: E0120 19:01:53.471279 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" containerName="registry-server" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.471287 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" containerName="registry-server" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.471506 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e61e54-f5f2-4d5e-bf60-2cbbf46a028d" containerName="registry-server" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.473104 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.476225 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c848c"] Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.544905 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e3a49b-8884-4104-95a3-bb85e6647b68-catalog-content\") pod \"redhat-operators-c848c\" (UID: \"53e3a49b-8884-4104-95a3-bb85e6647b68\") " pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.544949 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e3a49b-8884-4104-95a3-bb85e6647b68-utilities\") pod \"redhat-operators-c848c\" (UID: \"53e3a49b-8884-4104-95a3-bb85e6647b68\") " pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.545256 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2wj4\" (UniqueName: \"kubernetes.io/projected/53e3a49b-8884-4104-95a3-bb85e6647b68-kube-api-access-g2wj4\") pod \"redhat-operators-c848c\" (UID: \"53e3a49b-8884-4104-95a3-bb85e6647b68\") " pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.647581 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2wj4\" (UniqueName: \"kubernetes.io/projected/53e3a49b-8884-4104-95a3-bb85e6647b68-kube-api-access-g2wj4\") pod \"redhat-operators-c848c\" (UID: \"53e3a49b-8884-4104-95a3-bb85e6647b68\") " pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.647693 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e3a49b-8884-4104-95a3-bb85e6647b68-catalog-content\") pod \"redhat-operators-c848c\" (UID: \"53e3a49b-8884-4104-95a3-bb85e6647b68\") " pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.647719 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e3a49b-8884-4104-95a3-bb85e6647b68-utilities\") pod \"redhat-operators-c848c\" (UID: \"53e3a49b-8884-4104-95a3-bb85e6647b68\") " pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.648201 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e3a49b-8884-4104-95a3-bb85e6647b68-utilities\") pod \"redhat-operators-c848c\" (UID: \"53e3a49b-8884-4104-95a3-bb85e6647b68\") " pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.648344 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e3a49b-8884-4104-95a3-bb85e6647b68-catalog-content\") pod \"redhat-operators-c848c\" (UID: \"53e3a49b-8884-4104-95a3-bb85e6647b68\") " pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.684518 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g2wj4\" (UniqueName: \"kubernetes.io/projected/53e3a49b-8884-4104-95a3-bb85e6647b68-kube-api-access-g2wj4\") pod \"redhat-operators-c848c\" (UID: \"53e3a49b-8884-4104-95a3-bb85e6647b68\") " pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:01:53 crc kubenswrapper[4661]: I0120 19:01:53.814188 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:01:54 crc kubenswrapper[4661]: I0120 19:01:54.335751 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c848c"] Jan 20 19:01:54 crc kubenswrapper[4661]: I0120 19:01:54.864586 4661 generic.go:334] "Generic (PLEG): container finished" podID="53e3a49b-8884-4104-95a3-bb85e6647b68" containerID="d289ff9ddeafea3d0150d3f72bf22daa842391505faa6069a0d75cc6489c1917" exitCode=0 Jan 20 19:01:54 crc kubenswrapper[4661]: I0120 19:01:54.864852 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c848c" event={"ID":"53e3a49b-8884-4104-95a3-bb85e6647b68","Type":"ContainerDied","Data":"d289ff9ddeafea3d0150d3f72bf22daa842391505faa6069a0d75cc6489c1917"} Jan 20 19:01:54 crc kubenswrapper[4661]: I0120 19:01:54.864876 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c848c" event={"ID":"53e3a49b-8884-4104-95a3-bb85e6647b68","Type":"ContainerStarted","Data":"dc92447c1ee35b12a4d1d6e110d18a2de468a96a183d78c5f8c1ff5566dc6b6d"} Jan 20 19:01:56 crc kubenswrapper[4661]: I0120 19:01:56.897649 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c848c" event={"ID":"53e3a49b-8884-4104-95a3-bb85e6647b68","Type":"ContainerStarted","Data":"c7c70d941860a019ca77717d9053837f20aa1ab2e49086e3b7ae6824b33e76b0"} Jan 20 19:01:58 crc kubenswrapper[4661]: I0120 19:01:58.923977 4661 generic.go:334] "Generic (PLEG): container finished" podID="53e3a49b-8884-4104-95a3-bb85e6647b68" containerID="c7c70d941860a019ca77717d9053837f20aa1ab2e49086e3b7ae6824b33e76b0" exitCode=0 Jan 20 19:01:58 crc kubenswrapper[4661]: I0120 19:01:58.924327 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c848c" event={"ID":"53e3a49b-8884-4104-95a3-bb85e6647b68","Type":"ContainerDied","Data":"c7c70d941860a019ca77717d9053837f20aa1ab2e49086e3b7ae6824b33e76b0"} Jan 20 19:01:59 crc kubenswrapper[4661]: I0120 19:01:59.936049 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c848c" event={"ID":"53e3a49b-8884-4104-95a3-bb85e6647b68","Type":"ContainerStarted","Data":"b5d0f9e57946bcca7fd84db9fa00fa8c726e60e6cbd9d87bb075b62aadd97d43"} Jan 20 19:01:59 crc kubenswrapper[4661]: I0120 19:01:59.963360 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c848c" podStartSLOduration=2.443504252 podStartE2EDuration="6.963342504s" podCreationTimestamp="2026-01-20 19:01:53 +0000 UTC" firstStartedPulling="2026-01-20 19:01:54.866144964 +0000 UTC m=+3371.196934616" lastFinishedPulling="2026-01-20 19:01:59.385983196 +0000 UTC m=+3375.716772868" observedRunningTime="2026-01-20 19:01:59.957041359 +0000 UTC m=+3376.287831021" watchObservedRunningTime="2026-01-20 19:01:59.963342504 +0000 UTC m=+3376.294132166" Jan 20 19:02:01 crc kubenswrapper[4661]: I0120 19:02:01.954864 4661 generic.go:334] "Generic (PLEG): container finished" podID="40502583-1982-469d-a228-04488a4eb068" 
containerID="85825afdfd9e1e147a8243598206aed288a51b71c95b41e7b42b693762885448" exitCode=0 Jan 20 19:02:01 crc kubenswrapper[4661]: I0120 19:02:01.954952 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" event={"ID":"40502583-1982-469d-a228-04488a4eb068","Type":"ContainerDied","Data":"85825afdfd9e1e147a8243598206aed288a51b71c95b41e7b42b693762885448"} Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.485208 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.532054 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-cell1-compute-config-1\") pod \"40502583-1982-469d-a228-04488a4eb068\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.532134 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-migration-ssh-key-0\") pod \"40502583-1982-469d-a228-04488a4eb068\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.532201 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-custom-ceph-combined-ca-bundle\") pod \"40502583-1982-469d-a228-04488a4eb068\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.532288 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-ssh-key-openstack-edpm-ipam\") pod \"40502583-1982-469d-a228-04488a4eb068\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.532303 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-migration-ssh-key-1\") pod \"40502583-1982-469d-a228-04488a4eb068\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.532382 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-inventory\") pod \"40502583-1982-469d-a228-04488a4eb068\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.532402 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-ceph\") pod \"40502583-1982-469d-a228-04488a4eb068\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.532426 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/40502583-1982-469d-a228-04488a4eb068-nova-extra-config-0\") pod \"40502583-1982-469d-a228-04488a4eb068\" (UID: 
\"40502583-1982-469d-a228-04488a4eb068\") " Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.532459 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-cell1-compute-config-0\") pod \"40502583-1982-469d-a228-04488a4eb068\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.532479 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttn2t\" (UniqueName: \"kubernetes.io/projected/40502583-1982-469d-a228-04488a4eb068-kube-api-access-ttn2t\") pod \"40502583-1982-469d-a228-04488a4eb068\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.532499 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/40502583-1982-469d-a228-04488a4eb068-ceph-nova-0\") pod \"40502583-1982-469d-a228-04488a4eb068\" (UID: \"40502583-1982-469d-a228-04488a4eb068\") " Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.546725 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40502583-1982-469d-a228-04488a4eb068-kube-api-access-ttn2t" (OuterVolumeSpecName: "kube-api-access-ttn2t") pod "40502583-1982-469d-a228-04488a4eb068" (UID: "40502583-1982-469d-a228-04488a4eb068"). InnerVolumeSpecName "kube-api-access-ttn2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.548942 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-ceph" (OuterVolumeSpecName: "ceph") pod "40502583-1982-469d-a228-04488a4eb068" (UID: "40502583-1982-469d-a228-04488a4eb068"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.560373 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "40502583-1982-469d-a228-04488a4eb068" (UID: "40502583-1982-469d-a228-04488a4eb068"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.568630 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "40502583-1982-469d-a228-04488a4eb068" (UID: "40502583-1982-469d-a228-04488a4eb068"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.569822 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-inventory" (OuterVolumeSpecName: "inventory") pod "40502583-1982-469d-a228-04488a4eb068" (UID: "40502583-1982-469d-a228-04488a4eb068"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.574193 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "40502583-1982-469d-a228-04488a4eb068" (UID: "40502583-1982-469d-a228-04488a4eb068"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.585709 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40502583-1982-469d-a228-04488a4eb068-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "40502583-1982-469d-a228-04488a4eb068" (UID: "40502583-1982-469d-a228-04488a4eb068"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.589776 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40502583-1982-469d-a228-04488a4eb068-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "40502583-1982-469d-a228-04488a4eb068" (UID: "40502583-1982-469d-a228-04488a4eb068"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.589878 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "40502583-1982-469d-a228-04488a4eb068" (UID: "40502583-1982-469d-a228-04488a4eb068"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.598260 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "40502583-1982-469d-a228-04488a4eb068" (UID: "40502583-1982-469d-a228-04488a4eb068"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.616921 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "40502583-1982-469d-a228-04488a4eb068" (UID: "40502583-1982-469d-a228-04488a4eb068"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.634352 4661 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.634388 4661 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.634400 4661 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.634433 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.634444 4661 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.634454 4661 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.634462 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.634471 4661 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/40502583-1982-469d-a228-04488a4eb068-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.634482 4661 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/40502583-1982-469d-a228-04488a4eb068-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.634490 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttn2t\" (UniqueName: \"kubernetes.io/projected/40502583-1982-469d-a228-04488a4eb068-kube-api-access-ttn2t\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.634498 4661 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/40502583-1982-469d-a228-04488a4eb068-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.814633 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.814704 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 
19:02:03.971869 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" event={"ID":"40502583-1982-469d-a228-04488a4eb068","Type":"ContainerDied","Data":"779175ea1fcf72b8fb5d182ba5e7d0c524da7d8ea701711137cf790c5e2af981"} Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.971914 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb" Jan 20 19:02:03 crc kubenswrapper[4661]: I0120 19:02:03.971929 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="779175ea1fcf72b8fb5d182ba5e7d0c524da7d8ea701711137cf790c5e2af981" Jan 20 19:02:04 crc kubenswrapper[4661]: I0120 19:02:04.860515 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c848c" podUID="53e3a49b-8884-4104-95a3-bb85e6647b68" containerName="registry-server" probeResult="failure" output=< Jan 20 19:02:04 crc kubenswrapper[4661]: timeout: failed to connect service ":50051" within 1s Jan 20 19:02:04 crc kubenswrapper[4661]: > Jan 20 19:02:13 crc kubenswrapper[4661]: I0120 19:02:13.861385 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:02:13 crc kubenswrapper[4661]: I0120 19:02:13.933502 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:02:14 crc kubenswrapper[4661]: I0120 19:02:14.408836 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c848c"] Jan 20 19:02:15 crc kubenswrapper[4661]: I0120 19:02:15.085456 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c848c" podUID="53e3a49b-8884-4104-95a3-bb85e6647b68" containerName="registry-server" containerID="cri-o://b5d0f9e57946bcca7fd84db9fa00fa8c726e60e6cbd9d87bb075b62aadd97d43" gracePeriod=2 Jan 20 19:02:15 crc kubenswrapper[4661]: I0120 19:02:15.604156 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:02:15 crc kubenswrapper[4661]: I0120 19:02:15.760926 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e3a49b-8884-4104-95a3-bb85e6647b68-utilities\") pod \"53e3a49b-8884-4104-95a3-bb85e6647b68\" (UID: \"53e3a49b-8884-4104-95a3-bb85e6647b68\") " Jan 20 19:02:15 crc kubenswrapper[4661]: I0120 19:02:15.760971 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e3a49b-8884-4104-95a3-bb85e6647b68-catalog-content\") pod \"53e3a49b-8884-4104-95a3-bb85e6647b68\" (UID: \"53e3a49b-8884-4104-95a3-bb85e6647b68\") " Jan 20 19:02:15 crc kubenswrapper[4661]: I0120 19:02:15.761080 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2wj4\" (UniqueName: \"kubernetes.io/projected/53e3a49b-8884-4104-95a3-bb85e6647b68-kube-api-access-g2wj4\") pod \"53e3a49b-8884-4104-95a3-bb85e6647b68\" (UID: \"53e3a49b-8884-4104-95a3-bb85e6647b68\") " Jan 20 19:02:15 crc kubenswrapper[4661]: I0120 19:02:15.762662 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e3a49b-8884-4104-95a3-bb85e6647b68-utilities" (OuterVolumeSpecName: "utilities") pod "53e3a49b-8884-4104-95a3-bb85e6647b68" (UID: "53e3a49b-8884-4104-95a3-bb85e6647b68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:02:15 crc kubenswrapper[4661]: I0120 19:02:15.778937 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e3a49b-8884-4104-95a3-bb85e6647b68-kube-api-access-g2wj4" (OuterVolumeSpecName: "kube-api-access-g2wj4") pod "53e3a49b-8884-4104-95a3-bb85e6647b68" (UID: "53e3a49b-8884-4104-95a3-bb85e6647b68"). InnerVolumeSpecName "kube-api-access-g2wj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:02:15 crc kubenswrapper[4661]: I0120 19:02:15.864353 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2wj4\" (UniqueName: \"kubernetes.io/projected/53e3a49b-8884-4104-95a3-bb85e6647b68-kube-api-access-g2wj4\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:15 crc kubenswrapper[4661]: I0120 19:02:15.864385 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e3a49b-8884-4104-95a3-bb85e6647b68-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:15 crc kubenswrapper[4661]: I0120 19:02:15.948190 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e3a49b-8884-4104-95a3-bb85e6647b68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53e3a49b-8884-4104-95a3-bb85e6647b68" (UID: "53e3a49b-8884-4104-95a3-bb85e6647b68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:02:15 crc kubenswrapper[4661]: I0120 19:02:15.968830 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e3a49b-8884-4104-95a3-bb85e6647b68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.097562 4661 generic.go:334] "Generic (PLEG): container finished" podID="53e3a49b-8884-4104-95a3-bb85e6647b68" containerID="b5d0f9e57946bcca7fd84db9fa00fa8c726e60e6cbd9d87bb075b62aadd97d43" exitCode=0 Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.097605 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c848c" event={"ID":"53e3a49b-8884-4104-95a3-bb85e6647b68","Type":"ContainerDied","Data":"b5d0f9e57946bcca7fd84db9fa00fa8c726e60e6cbd9d87bb075b62aadd97d43"} Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.097634 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c848c" event={"ID":"53e3a49b-8884-4104-95a3-bb85e6647b68","Type":"ContainerDied","Data":"dc92447c1ee35b12a4d1d6e110d18a2de468a96a183d78c5f8c1ff5566dc6b6d"} Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.097654 4661 scope.go:117] "RemoveContainer" containerID="b5d0f9e57946bcca7fd84db9fa00fa8c726e60e6cbd9d87bb075b62aadd97d43" Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.097816 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c848c" Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.121703 4661 scope.go:117] "RemoveContainer" containerID="c7c70d941860a019ca77717d9053837f20aa1ab2e49086e3b7ae6824b33e76b0" Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.144746 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c848c"] Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.152766 4661 scope.go:117] "RemoveContainer" containerID="d289ff9ddeafea3d0150d3f72bf22daa842391505faa6069a0d75cc6489c1917" Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.156409 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c848c"] Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.179725 4661 scope.go:117] "RemoveContainer" containerID="b5d0f9e57946bcca7fd84db9fa00fa8c726e60e6cbd9d87bb075b62aadd97d43" Jan 20 19:02:16 crc kubenswrapper[4661]: E0120 19:02:16.181540 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d0f9e57946bcca7fd84db9fa00fa8c726e60e6cbd9d87bb075b62aadd97d43\": container with ID starting with b5d0f9e57946bcca7fd84db9fa00fa8c726e60e6cbd9d87bb075b62aadd97d43 not found: ID does not exist" containerID="b5d0f9e57946bcca7fd84db9fa00fa8c726e60e6cbd9d87bb075b62aadd97d43" Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.181696 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d0f9e57946bcca7fd84db9fa00fa8c726e60e6cbd9d87bb075b62aadd97d43"} err="failed to get container status \"b5d0f9e57946bcca7fd84db9fa00fa8c726e60e6cbd9d87bb075b62aadd97d43\": rpc error: code = NotFound desc = could not find container \"b5d0f9e57946bcca7fd84db9fa00fa8c726e60e6cbd9d87bb075b62aadd97d43\": container with ID starting with b5d0f9e57946bcca7fd84db9fa00fa8c726e60e6cbd9d87bb075b62aadd97d43 not found: ID does not exist" Jan 20 19:02:16 crc 
kubenswrapper[4661]: I0120 19:02:16.181813 4661 scope.go:117] "RemoveContainer" containerID="c7c70d941860a019ca77717d9053837f20aa1ab2e49086e3b7ae6824b33e76b0" Jan 20 19:02:16 crc kubenswrapper[4661]: E0120 19:02:16.182464 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7c70d941860a019ca77717d9053837f20aa1ab2e49086e3b7ae6824b33e76b0\": container with ID starting with c7c70d941860a019ca77717d9053837f20aa1ab2e49086e3b7ae6824b33e76b0 not found: ID does not exist" containerID="c7c70d941860a019ca77717d9053837f20aa1ab2e49086e3b7ae6824b33e76b0" Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.182513 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7c70d941860a019ca77717d9053837f20aa1ab2e49086e3b7ae6824b33e76b0"} err="failed to get container status \"c7c70d941860a019ca77717d9053837f20aa1ab2e49086e3b7ae6824b33e76b0\": rpc error: code = NotFound desc = could not find container \"c7c70d941860a019ca77717d9053837f20aa1ab2e49086e3b7ae6824b33e76b0\": container with ID starting with c7c70d941860a019ca77717d9053837f20aa1ab2e49086e3b7ae6824b33e76b0 not found: ID does not exist" Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.182543 4661 scope.go:117] "RemoveContainer" containerID="d289ff9ddeafea3d0150d3f72bf22daa842391505faa6069a0d75cc6489c1917" Jan 20 19:02:16 crc kubenswrapper[4661]: E0120 19:02:16.182954 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d289ff9ddeafea3d0150d3f72bf22daa842391505faa6069a0d75cc6489c1917\": container with ID starting with d289ff9ddeafea3d0150d3f72bf22daa842391505faa6069a0d75cc6489c1917 not found: ID does not exist" containerID="d289ff9ddeafea3d0150d3f72bf22daa842391505faa6069a0d75cc6489c1917" Jan 20 19:02:16 crc kubenswrapper[4661]: I0120 19:02:16.183065 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d289ff9ddeafea3d0150d3f72bf22daa842391505faa6069a0d75cc6489c1917"} err="failed to get container status \"d289ff9ddeafea3d0150d3f72bf22daa842391505faa6069a0d75cc6489c1917\": rpc error: code = NotFound desc = could not find container \"d289ff9ddeafea3d0150d3f72bf22daa842391505faa6069a0d75cc6489c1917\": container with ID starting with d289ff9ddeafea3d0150d3f72bf22daa842391505faa6069a0d75cc6489c1917 not found: ID does not exist" Jan 20 19:02:18 crc kubenswrapper[4661]: I0120 19:02:18.155088 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e3a49b-8884-4104-95a3-bb85e6647b68" path="/var/lib/kubelet/pods/53e3a49b-8884-4104-95a3-bb85e6647b68/volumes" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.489723 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 20 19:02:19 crc kubenswrapper[4661]: E0120 19:02:19.490275 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e3a49b-8884-4104-95a3-bb85e6647b68" containerName="registry-server" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.490289 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e3a49b-8884-4104-95a3-bb85e6647b68" containerName="registry-server" Jan 20 19:02:19 crc kubenswrapper[4661]: E0120 19:02:19.490296 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40502583-1982-469d-a228-04488a4eb068" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 
19:02:19.490303 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="40502583-1982-469d-a228-04488a4eb068" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 20 19:02:19 crc kubenswrapper[4661]: E0120 19:02:19.490313 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e3a49b-8884-4104-95a3-bb85e6647b68" containerName="extract-content" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.490319 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e3a49b-8884-4104-95a3-bb85e6647b68" containerName="extract-content" Jan 20 19:02:19 crc kubenswrapper[4661]: E0120 19:02:19.490328 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e3a49b-8884-4104-95a3-bb85e6647b68" containerName="extract-utilities" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.490334 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e3a49b-8884-4104-95a3-bb85e6647b68" containerName="extract-utilities" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.490501 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e3a49b-8884-4104-95a3-bb85e6647b68" containerName="registry-server" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.490525 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="40502583-1982-469d-a228-04488a4eb068" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.491324 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.493992 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.494235 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.539551 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-sys\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.539627 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.539697 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.539727 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.539761 4661 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.539803 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-run\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.539830 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.539881 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.539956 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.539983 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.540016 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-dev\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.540043 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.540088 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxwfg\" (UniqueName: \"kubernetes.io/projected/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-kube-api-access-cxwfg\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.544006 4661 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.544592 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.544646 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.544751 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.601745 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.603164 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.605337 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646054 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-sys\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646110 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646145 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646163 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646210 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " 
pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646225 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-run\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646243 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646271 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646310 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646325 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646347 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-dev\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646362 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646387 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxwfg\" (UniqueName: \"kubernetes.io/projected/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-kube-api-access-cxwfg\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646415 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646429 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646449 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.646743 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.647436 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.647484 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.647692 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.647750 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-run\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.647834 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.647887 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.647981 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.648020 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-dev\") pod 
\"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.648049 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-sys\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.648088 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.652600 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.653196 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.661715 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.661831 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.670607 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxwfg\" (UniqueName: \"kubernetes.io/projected/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-kube-api-access-cxwfg\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.671164 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8c04a60-5bb8-4d54-93a6-1acfcbea3358-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"c8c04a60-5bb8-4d54-93a6-1acfcbea3358\") " pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.747542 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3066acf4-e48e-410e-8623-f29b5424f4fe-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.747607 4661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.747657 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.747776 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t29xq\" (UniqueName: \"kubernetes.io/projected/3066acf4-e48e-410e-8623-f29b5424f4fe-kube-api-access-t29xq\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.747796 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3066acf4-e48e-410e-8623-f29b5424f4fe-ceph\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.747813 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-sys\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.747833 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.747857 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-run\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.747875 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.747942 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.748070 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-var-lib-cinder\") 
pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.748110 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3066acf4-e48e-410e-8623-f29b5424f4fe-scripts\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.748151 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3066acf4-e48e-410e-8623-f29b5424f4fe-config-data\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.748180 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3066acf4-e48e-410e-8623-f29b5424f4fe-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.748396 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-lib-modules\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.748423 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-dev\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.811060 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850265 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t29xq\" (UniqueName: \"kubernetes.io/projected/3066acf4-e48e-410e-8623-f29b5424f4fe-kube-api-access-t29xq\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850316 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3066acf4-e48e-410e-8623-f29b5424f4fe-ceph\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850338 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-sys\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850365 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850398 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-run\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850420 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850437 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850460 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850479 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3066acf4-e48e-410e-8623-f29b5424f4fe-scripts\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850505 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3066acf4-e48e-410e-8623-f29b5424f4fe-config-data\") pod \"cinder-backup-0\" (UID: 
\"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850528 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3066acf4-e48e-410e-8623-f29b5424f4fe-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850573 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-lib-modules\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850587 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-dev\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850615 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3066acf4-e48e-410e-8623-f29b5424f4fe-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850648 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850699 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.850809 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.851254 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-sys\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.851327 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.851359 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-run\") pod \"cinder-backup-0\" (UID: 
\"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.851398 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.851428 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.851468 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.851972 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-lib-modules\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.852849 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.852874 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3066acf4-e48e-410e-8623-f29b5424f4fe-dev\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.856019 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3066acf4-e48e-410e-8623-f29b5424f4fe-ceph\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.856165 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3066acf4-e48e-410e-8623-f29b5424f4fe-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.856817 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3066acf4-e48e-410e-8623-f29b5424f4fe-config-data\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.857235 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3066acf4-e48e-410e-8623-f29b5424f4fe-scripts\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc 
kubenswrapper[4661]: I0120 19:02:19.857352 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3066acf4-e48e-410e-8623-f29b5424f4fe-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.870418 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t29xq\" (UniqueName: \"kubernetes.io/projected/3066acf4-e48e-410e-8623-f29b5424f4fe-kube-api-access-t29xq\") pod \"cinder-backup-0\" (UID: \"3066acf4-e48e-410e-8623-f29b5424f4fe\") " pod="openstack/cinder-backup-0" Jan 20 19:02:19 crc kubenswrapper[4661]: I0120 19:02:19.933740 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.164516 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-24n4j"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.166877 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-24n4j" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.179559 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-24n4j"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.264747 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f250573d-7ee6-4f08-92f7-b8997189e124-operator-scripts\") pod \"manila-db-create-24n4j\" (UID: \"f250573d-7ee6-4f08-92f7-b8997189e124\") " pod="openstack/manila-db-create-24n4j" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.264950 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsdpq\" (UniqueName: \"kubernetes.io/projected/f250573d-7ee6-4f08-92f7-b8997189e124-kube-api-access-lsdpq\") pod \"manila-db-create-24n4j\" (UID: \"f250573d-7ee6-4f08-92f7-b8997189e124\") " pod="openstack/manila-db-create-24n4j" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.288284 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-316e-account-create-update-bp2gj"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.289356 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-316e-account-create-update-bp2gj" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.293395 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.310358 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-316e-account-create-update-bp2gj"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.349999 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68df949b55-t6lcn"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.352287 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.364719 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.364948 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.365205 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-fx22k" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.365584 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.366629 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cjg7\" (UniqueName: \"kubernetes.io/projected/7fa209ed-495f-49bb-b9cc-01ad4e1032a3-kube-api-access-4cjg7\") pod \"manila-316e-account-create-update-bp2gj\" (UID: \"7fa209ed-495f-49bb-b9cc-01ad4e1032a3\") " pod="openstack/manila-316e-account-create-update-bp2gj" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.366746 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsdpq\" (UniqueName: \"kubernetes.io/projected/f250573d-7ee6-4f08-92f7-b8997189e124-kube-api-access-lsdpq\") pod \"manila-db-create-24n4j\" (UID: \"f250573d-7ee6-4f08-92f7-b8997189e124\") " pod="openstack/manila-db-create-24n4j" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.366802 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa209ed-495f-49bb-b9cc-01ad4e1032a3-operator-scripts\") pod \"manila-316e-account-create-update-bp2gj\" (UID: \"7fa209ed-495f-49bb-b9cc-01ad4e1032a3\") " pod="openstack/manila-316e-account-create-update-bp2gj" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.366847 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f250573d-7ee6-4f08-92f7-b8997189e124-operator-scripts\") pod \"manila-db-create-24n4j\" (UID: \"f250573d-7ee6-4f08-92f7-b8997189e124\") " pod="openstack/manila-db-create-24n4j" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.367719 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f250573d-7ee6-4f08-92f7-b8997189e124-operator-scripts\") pod \"manila-db-create-24n4j\" (UID: \"f250573d-7ee6-4f08-92f7-b8997189e124\") " pod="openstack/manila-db-create-24n4j" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.381486 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68df949b55-t6lcn"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.397615 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.399047 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.399318 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsdpq\" (UniqueName: \"kubernetes.io/projected/f250573d-7ee6-4f08-92f7-b8997189e124-kube-api-access-lsdpq\") pod \"manila-db-create-24n4j\" (UID: \"f250573d-7ee6-4f08-92f7-b8997189e124\") " pod="openstack/manila-db-create-24n4j" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.402496 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.402796 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q5zvh" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.403363 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.403495 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.415220 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:20 crc kubenswrapper[4661]: E0120 19:02:20.429887 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceph combined-ca-bundle config-data glance httpd-run kube-api-access-cqwr2 logs public-tls-certs scripts], unattached volumes=[], failed to process volumes=[ceph combined-ca-bundle config-data glance httpd-run kube-api-access-cqwr2 logs public-tls-certs scripts]: context canceled" pod="openstack/glance-default-external-api-0" podUID="ca62c0a5-b15f-41e9-8eb2-507a277856a3" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.453775 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.455159 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.461430 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.461863 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.468131 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24a02294-d575-420a-a004-9eaac022318e-horizon-secret-key\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.468298 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.468368 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24a02294-d575-420a-a004-9eaac022318e-logs\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.468457 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa209ed-495f-49bb-b9cc-01ad4e1032a3-operator-scripts\") pod \"manila-316e-account-create-update-bp2gj\" (UID: \"7fa209ed-495f-49bb-b9cc-01ad4e1032a3\") " pod="openstack/manila-316e-account-create-update-bp2gj" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.468566 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ca62c0a5-b15f-41e9-8eb2-507a277856a3-ceph\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.468696 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24a02294-d575-420a-a004-9eaac022318e-config-data\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.468792 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f64vj\" (UniqueName: \"kubernetes.io/projected/24a02294-d575-420a-a004-9eaac022318e-kube-api-access-f64vj\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.468861 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cjg7\" (UniqueName: \"kubernetes.io/projected/7fa209ed-495f-49bb-b9cc-01ad4e1032a3-kube-api-access-4cjg7\") pod 
\"manila-316e-account-create-update-bp2gj\" (UID: \"7fa209ed-495f-49bb-b9cc-01ad4e1032a3\") " pod="openstack/manila-316e-account-create-update-bp2gj" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.468939 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.469012 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24a02294-d575-420a-a004-9eaac022318e-scripts\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.469082 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-scripts\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.469151 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca62c0a5-b15f-41e9-8eb2-507a277856a3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.469230 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.469319 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca62c0a5-b15f-41e9-8eb2-507a277856a3-logs\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.469408 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-config-data\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.469491 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqwr2\" (UniqueName: \"kubernetes.io/projected/ca62c0a5-b15f-41e9-8eb2-507a277856a3-kube-api-access-cqwr2\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.470221 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7fa209ed-495f-49bb-b9cc-01ad4e1032a3-operator-scripts\") pod \"manila-316e-account-create-update-bp2gj\" (UID: \"7fa209ed-495f-49bb-b9cc-01ad4e1032a3\") " pod="openstack/manila-316e-account-create-update-bp2gj" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.472522 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.508275 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cjg7\" (UniqueName: \"kubernetes.io/projected/7fa209ed-495f-49bb-b9cc-01ad4e1032a3-kube-api-access-4cjg7\") pod \"manila-316e-account-create-update-bp2gj\" (UID: \"7fa209ed-495f-49bb-b9cc-01ad4e1032a3\") " pod="openstack/manila-316e-account-create-update-bp2gj" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.517841 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-24n4j" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.531248 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.553956 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-676bf6649-b97jp"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.555353 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.561356 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:20 crc kubenswrapper[4661]: E0120 19:02:20.562272 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceph combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-rbfwq logs scripts], unattached volumes=[], failed to process volumes=[ceph combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-rbfwq logs scripts]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="43ad721f-147e-4d5f-bedf-7f75d256e791" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571098 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ca62c0a5-b15f-41e9-8eb2-507a277856a3-ceph\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571149 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24a02294-d575-420a-a004-9eaac022318e-config-data\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571168 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f64vj\" (UniqueName: \"kubernetes.io/projected/24a02294-d575-420a-a004-9eaac022318e-kube-api-access-f64vj\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571192 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571212 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24a02294-d575-420a-a004-9eaac022318e-scripts\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571230 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-scripts\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571249 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca62c0a5-b15f-41e9-8eb2-507a277856a3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571265 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571288 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca62c0a5-b15f-41e9-8eb2-507a277856a3-logs\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571318 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-config-data\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571340 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqwr2\" (UniqueName: \"kubernetes.io/projected/ca62c0a5-b15f-41e9-8eb2-507a277856a3-kube-api-access-cqwr2\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571365 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24a02294-d575-420a-a004-9eaac022318e-horizon-secret-key\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571399 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571412 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24a02294-d575-420a-a004-9eaac022318e-logs\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.571837 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24a02294-d575-420a-a004-9eaac022318e-logs\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.574028 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.574575 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24a02294-d575-420a-a004-9eaac022318e-scripts\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.574066 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-676bf6649-b97jp"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.575506 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24a02294-d575-420a-a004-9eaac022318e-config-data\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.576700 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca62c0a5-b15f-41e9-8eb2-507a277856a3-logs\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.580494 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24a02294-d575-420a-a004-9eaac022318e-horizon-secret-key\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.574066 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca62c0a5-b15f-41e9-8eb2-507a277856a3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.593876 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-scripts\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " 
pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.598137 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ca62c0a5-b15f-41e9-8eb2-507a277856a3-ceph\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.599149 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.602371 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-config-data\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.604193 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.605601 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f64vj\" (UniqueName: \"kubernetes.io/projected/24a02294-d575-420a-a004-9eaac022318e-kube-api-access-f64vj\") pod \"horizon-68df949b55-t6lcn\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.612892 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqwr2\" (UniqueName: \"kubernetes.io/projected/ca62c0a5-b15f-41e9-8eb2-507a277856a3-kube-api-access-cqwr2\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.626429 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.632416 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.636592 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-316e-account-create-update-bp2gj" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675136 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-logs\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675207 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-scripts\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675240 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/43ad721f-147e-4d5f-bedf-7f75d256e791-ceph\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675282 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbfwq\" (UniqueName: \"kubernetes.io/projected/43ad721f-147e-4d5f-bedf-7f75d256e791-kube-api-access-rbfwq\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675322 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-config-data\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675344 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ad721f-147e-4d5f-bedf-7f75d256e791-logs\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675362 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43ad721f-147e-4d5f-bedf-7f75d256e791-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675378 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-config-data\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675400 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-scripts\") pod \"horizon-676bf6649-b97jp\" (UID: 
\"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675423 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675442 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675477 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-horizon-secret-key\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675502 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.675531 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxvqn\" (UniqueName: \"kubernetes.io/projected/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-kube-api-access-gxvqn\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.699604 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.778616 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-scripts\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.778907 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/43ad721f-147e-4d5f-bedf-7f75d256e791-ceph\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.778951 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbfwq\" (UniqueName: \"kubernetes.io/projected/43ad721f-147e-4d5f-bedf-7f75d256e791-kube-api-access-rbfwq\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.778981 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-config-data\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.779004 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ad721f-147e-4d5f-bedf-7f75d256e791-logs\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.781524 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ad721f-147e-4d5f-bedf-7f75d256e791-logs\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.781914 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43ad721f-147e-4d5f-bedf-7f75d256e791-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.782066 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-config-data\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.782100 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-scripts\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.782134 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.782149 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.782944 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-horizon-secret-key\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.783022 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.783076 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxvqn\" (UniqueName: \"kubernetes.io/projected/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-kube-api-access-gxvqn\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.783120 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-logs\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.784427 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43ad721f-147e-4d5f-bedf-7f75d256e791-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.784615 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.793328 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.793681 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/43ad721f-147e-4d5f-bedf-7f75d256e791-ceph\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.793965 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.797026 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-logs\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.797209 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-scripts\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.797611 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-scripts\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.806605 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-config-data\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.806743 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-config-data\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.824868 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-horizon-secret-key\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.840489 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbfwq\" (UniqueName: \"kubernetes.io/projected/43ad721f-147e-4d5f-bedf-7f75d256e791-kube-api-access-rbfwq\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.854330 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxvqn\" (UniqueName: \"kubernetes.io/projected/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-kube-api-access-gxvqn\") pod \"horizon-676bf6649-b97jp\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " 
pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.889003 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.921985 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:20 crc kubenswrapper[4661]: I0120 19:02:20.962991 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.149441 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-24n4j"] Jan 20 19:02:21 crc kubenswrapper[4661]: W0120 19:02:21.188577 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf250573d_7ee6_4f08_92f7_b8997189e124.slice/crio-1128542f234c6f5c3e31f0e95e9e08d2071b23a68b1316987992d3a4696044b3 WatchSource:0}: Error finding container 1128542f234c6f5c3e31f0e95e9e08d2071b23a68b1316987992d3a4696044b3: Status 404 returned error can't find the container with id 1128542f234c6f5c3e31f0e95e9e08d2071b23a68b1316987992d3a4696044b3 Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.193082 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3066acf4-e48e-410e-8623-f29b5424f4fe","Type":"ContainerStarted","Data":"a2071312a1169ee21492660c6f1aab4c567dae8bdbc14e1e4e200d775d919697"} Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.198083 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.199211 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"c8c04a60-5bb8-4d54-93a6-1acfcbea3358","Type":"ContainerStarted","Data":"4e5742b740b6adf546538fac92dc7f3efe2afba35b6a5215b2a8507d1488422f"} Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.199300 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.223712 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.265081 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.372973 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-316e-account-create-update-bp2gj"] Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.421567 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbfwq\" (UniqueName: \"kubernetes.io/projected/43ad721f-147e-4d5f-bedf-7f75d256e791-kube-api-access-rbfwq\") pod \"43ad721f-147e-4d5f-bedf-7f75d256e791\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.421615 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-config-data\") pod \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.421715 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca62c0a5-b15f-41e9-8eb2-507a277856a3-httpd-run\") pod \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.421762 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43ad721f-147e-4d5f-bedf-7f75d256e791-httpd-run\") pod \"43ad721f-147e-4d5f-bedf-7f75d256e791\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.421819 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ad721f-147e-4d5f-bedf-7f75d256e791-logs\") pod \"43ad721f-147e-4d5f-bedf-7f75d256e791\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.421844 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-config-data\") pod \"43ad721f-147e-4d5f-bedf-7f75d256e791\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.421861 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/43ad721f-147e-4d5f-bedf-7f75d256e791-ceph\") pod \"43ad721f-147e-4d5f-bedf-7f75d256e791\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.421899 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-public-tls-certs\") pod \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.421935 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-combined-ca-bundle\") pod \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.421955 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-scripts\") pod \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.421993 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca62c0a5-b15f-41e9-8eb2-507a277856a3-logs\") pod \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.422009 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-combined-ca-bundle\") pod \"43ad721f-147e-4d5f-bedf-7f75d256e791\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.422043 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"43ad721f-147e-4d5f-bedf-7f75d256e791\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.422077 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-internal-tls-certs\") pod \"43ad721f-147e-4d5f-bedf-7f75d256e791\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.422091 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.422118 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqwr2\" (UniqueName: \"kubernetes.io/projected/ca62c0a5-b15f-41e9-8eb2-507a277856a3-kube-api-access-cqwr2\") pod \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.422136 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ca62c0a5-b15f-41e9-8eb2-507a277856a3-ceph\") pod \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\" (UID: \"ca62c0a5-b15f-41e9-8eb2-507a277856a3\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.422152 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-scripts\") pod \"43ad721f-147e-4d5f-bedf-7f75d256e791\" (UID: \"43ad721f-147e-4d5f-bedf-7f75d256e791\") " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.424991 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43ad721f-147e-4d5f-bedf-7f75d256e791-logs" (OuterVolumeSpecName: "logs") pod "43ad721f-147e-4d5f-bedf-7f75d256e791" (UID: "43ad721f-147e-4d5f-bedf-7f75d256e791"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.435815 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca62c0a5-b15f-41e9-8eb2-507a277856a3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ca62c0a5-b15f-41e9-8eb2-507a277856a3" (UID: "ca62c0a5-b15f-41e9-8eb2-507a277856a3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.436516 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43ad721f-147e-4d5f-bedf-7f75d256e791-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "43ad721f-147e-4d5f-bedf-7f75d256e791" (UID: "43ad721f-147e-4d5f-bedf-7f75d256e791"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.441934 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-scripts" (OuterVolumeSpecName: "scripts") pod "ca62c0a5-b15f-41e9-8eb2-507a277856a3" (UID: "ca62c0a5-b15f-41e9-8eb2-507a277856a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.442211 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca62c0a5-b15f-41e9-8eb2-507a277856a3-logs" (OuterVolumeSpecName: "logs") pod "ca62c0a5-b15f-41e9-8eb2-507a277856a3" (UID: "ca62c0a5-b15f-41e9-8eb2-507a277856a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.443558 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca62c0a5-b15f-41e9-8eb2-507a277856a3-kube-api-access-cqwr2" (OuterVolumeSpecName: "kube-api-access-cqwr2") pod "ca62c0a5-b15f-41e9-8eb2-507a277856a3" (UID: "ca62c0a5-b15f-41e9-8eb2-507a277856a3"). InnerVolumeSpecName "kube-api-access-cqwr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.443611 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-config-data" (OuterVolumeSpecName: "config-data") pod "ca62c0a5-b15f-41e9-8eb2-507a277856a3" (UID: "ca62c0a5-b15f-41e9-8eb2-507a277856a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.445268 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "43ad721f-147e-4d5f-bedf-7f75d256e791" (UID: "43ad721f-147e-4d5f-bedf-7f75d256e791"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.447043 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ad721f-147e-4d5f-bedf-7f75d256e791-kube-api-access-rbfwq" (OuterVolumeSpecName: "kube-api-access-rbfwq") pod "43ad721f-147e-4d5f-bedf-7f75d256e791" (UID: "43ad721f-147e-4d5f-bedf-7f75d256e791"). InnerVolumeSpecName "kube-api-access-rbfwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.447562 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-scripts" (OuterVolumeSpecName: "scripts") pod "43ad721f-147e-4d5f-bedf-7f75d256e791" (UID: "43ad721f-147e-4d5f-bedf-7f75d256e791"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.449141 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca62c0a5-b15f-41e9-8eb2-507a277856a3-ceph" (OuterVolumeSpecName: "ceph") pod "ca62c0a5-b15f-41e9-8eb2-507a277856a3" (UID: "ca62c0a5-b15f-41e9-8eb2-507a277856a3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.449155 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "43ad721f-147e-4d5f-bedf-7f75d256e791" (UID: "43ad721f-147e-4d5f-bedf-7f75d256e791"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.449202 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43ad721f-147e-4d5f-bedf-7f75d256e791" (UID: "43ad721f-147e-4d5f-bedf-7f75d256e791"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.453096 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "ca62c0a5-b15f-41e9-8eb2-507a277856a3" (UID: "ca62c0a5-b15f-41e9-8eb2-507a277856a3"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.455375 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-config-data" (OuterVolumeSpecName: "config-data") pod "43ad721f-147e-4d5f-bedf-7f75d256e791" (UID: "43ad721f-147e-4d5f-bedf-7f75d256e791"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.455421 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ca62c0a5-b15f-41e9-8eb2-507a277856a3" (UID: "ca62c0a5-b15f-41e9-8eb2-507a277856a3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.458162 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca62c0a5-b15f-41e9-8eb2-507a277856a3" (UID: "ca62c0a5-b15f-41e9-8eb2-507a277856a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.458561 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ad721f-147e-4d5f-bedf-7f75d256e791-ceph" (OuterVolumeSpecName: "ceph") pod "43ad721f-147e-4d5f-bedf-7f75d256e791" (UID: "43ad721f-147e-4d5f-bedf-7f75d256e791"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:02:21 crc kubenswrapper[4661]: W0120 19:02:21.503067 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24a02294_d575_420a_a004_9eaac022318e.slice/crio-3f77b1bb053b3f39e6f8f48c055a43bd4cd54b5524821232721a14ac2078bb4e WatchSource:0}: Error finding container 3f77b1bb053b3f39e6f8f48c055a43bd4cd54b5524821232721a14ac2078bb4e: Status 404 returned error can't find the container with id 3f77b1bb053b3f39e6f8f48c055a43bd4cd54b5524821232721a14ac2078bb4e Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.503898 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68df949b55-t6lcn"] Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524097 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ad721f-147e-4d5f-bedf-7f75d256e791-logs\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524133 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524144 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/43ad721f-147e-4d5f-bedf-7f75d256e791-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524154 4661 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524164 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524173 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524185 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca62c0a5-b15f-41e9-8eb2-507a277856a3-logs\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524192 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524218 4661 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524226 
4661 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524238 4661 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524248 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqwr2\" (UniqueName: \"kubernetes.io/projected/ca62c0a5-b15f-41e9-8eb2-507a277856a3-kube-api-access-cqwr2\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524257 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ca62c0a5-b15f-41e9-8eb2-507a277856a3-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524264 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ad721f-147e-4d5f-bedf-7f75d256e791-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524273 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbfwq\" (UniqueName: \"kubernetes.io/projected/43ad721f-147e-4d5f-bedf-7f75d256e791-kube-api-access-rbfwq\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524280 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca62c0a5-b15f-41e9-8eb2-507a277856a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524288 4661 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca62c0a5-b15f-41e9-8eb2-507a277856a3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.524297 4661 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43ad721f-147e-4d5f-bedf-7f75d256e791-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.555614 4661 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.579656 4661 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.601359 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-676bf6649-b97jp"] Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.625689 4661 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:21 crc kubenswrapper[4661]: I0120 19:02:21.625914 4661 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.209076 4661 generic.go:334] "Generic (PLEG): container finished" 
podID="f250573d-7ee6-4f08-92f7-b8997189e124" containerID="b587ce68b106b5ae693f9592d665999efe528a3c0e15079b8ad23da1349b8d97" exitCode=0 Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.209174 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-24n4j" event={"ID":"f250573d-7ee6-4f08-92f7-b8997189e124","Type":"ContainerDied","Data":"b587ce68b106b5ae693f9592d665999efe528a3c0e15079b8ad23da1349b8d97"} Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.209499 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-24n4j" event={"ID":"f250573d-7ee6-4f08-92f7-b8997189e124","Type":"ContainerStarted","Data":"1128542f234c6f5c3e31f0e95e9e08d2071b23a68b1316987992d3a4696044b3"} Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.211217 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676bf6649-b97jp" event={"ID":"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4","Type":"ContainerStarted","Data":"b8f2f9cb07882c796b1da7f28013c0e28f89a0106965273d7d8e5ae5221ee51c"} Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.214152 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"c8c04a60-5bb8-4d54-93a6-1acfcbea3358","Type":"ContainerStarted","Data":"52551cabbd3fddd333622e00e3798e7759164c96eaf627ab0f13f3bcf7232456"} Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.215275 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68df949b55-t6lcn" event={"ID":"24a02294-d575-420a-a004-9eaac022318e","Type":"ContainerStarted","Data":"3f77b1bb053b3f39e6f8f48c055a43bd4cd54b5524821232721a14ac2078bb4e"} Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.216421 4661 generic.go:334] "Generic (PLEG): container finished" podID="7fa209ed-495f-49bb-b9cc-01ad4e1032a3" containerID="d2f1d230ef760e9a7e511a0e8233b7a7dd94d66a7a0bd6fbdd810b6ddf328966" exitCode=0 Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.216484 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.216493 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-316e-account-create-update-bp2gj" event={"ID":"7fa209ed-495f-49bb-b9cc-01ad4e1032a3","Type":"ContainerDied","Data":"d2f1d230ef760e9a7e511a0e8233b7a7dd94d66a7a0bd6fbdd810b6ddf328966"} Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.216541 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-316e-account-create-update-bp2gj" event={"ID":"7fa209ed-495f-49bb-b9cc-01ad4e1032a3","Type":"ContainerStarted","Data":"8dab6f78b9dd029f62302a3f819fb03668ccdcc79d752d762657392444626403"} Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.216782 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.319318 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.335147 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.399752 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.401219 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.411227 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.411705 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q5zvh" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.411991 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.412121 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.422748 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.458785 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.486244 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.492066 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.517460 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.517570 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.523287 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.523473 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.558178 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7030a848-b903-4001-a699-5683451486b8-ceph\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.560404 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hnfj\" (UniqueName: \"kubernetes.io/projected/7030a848-b903-4001-a699-5683451486b8-kube-api-access-7hnfj\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.560555 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-scripts\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.560741 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.561618 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7030a848-b903-4001-a699-5683451486b8-logs\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.561829 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7030a848-b903-4001-a699-5683451486b8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.561930 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-config-data\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.562005 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: 
\"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.562101 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663605 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663656 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663691 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663715 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4653b95-1c26-4d87-bf2d-b3daea2414f9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663743 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7030a848-b903-4001-a699-5683451486b8-logs\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663765 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7030a848-b903-4001-a699-5683451486b8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663789 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-config-data\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663809 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: 
\"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663828 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663853 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663891 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663922 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7030a848-b903-4001-a699-5683451486b8-ceph\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663939 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4653b95-1c26-4d87-bf2d-b3daea2414f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663958 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hnfj\" (UniqueName: \"kubernetes.io/projected/7030a848-b903-4001-a699-5683451486b8-kube-api-access-7hnfj\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.663986 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-scripts\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.664004 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4653b95-1c26-4d87-bf2d-b3daea2414f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.664025 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.664046 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72b88\" (UniqueName: \"kubernetes.io/projected/c4653b95-1c26-4d87-bf2d-b3daea2414f9-kube-api-access-72b88\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.669406 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7030a848-b903-4001-a699-5683451486b8-ceph\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.669930 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7030a848-b903-4001-a699-5683451486b8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.670309 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.670453 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7030a848-b903-4001-a699-5683451486b8-logs\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.670697 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.676477 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.719333 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-scripts\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.720287 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-config-data\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc 
kubenswrapper[4661]: I0120 19:02:22.756806 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hnfj\" (UniqueName: \"kubernetes.io/projected/7030a848-b903-4001-a699-5683451486b8-kube-api-access-7hnfj\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.778515 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4653b95-1c26-4d87-bf2d-b3daea2414f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.778828 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4653b95-1c26-4d87-bf2d-b3daea2414f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.778979 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.779086 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72b88\" (UniqueName: \"kubernetes.io/projected/c4653b95-1c26-4d87-bf2d-b3daea2414f9-kube-api-access-72b88\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.779208 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.779301 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.779388 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4653b95-1c26-4d87-bf2d-b3daea2414f9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.779523 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.779641 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.779884 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.780850 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4653b95-1c26-4d87-bf2d-b3daea2414f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.787043 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4653b95-1c26-4d87-bf2d-b3daea2414f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.884538 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-676bf6649-b97jp"] Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.906029 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.923879 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4653b95-1c26-4d87-bf2d-b3daea2414f9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.924441 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.925322 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.926512 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.943043 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:22 crc kubenswrapper[4661]: I0120 19:02:22.962692 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72b88\" (UniqueName: \"kubernetes.io/projected/c4653b95-1c26-4d87-bf2d-b3daea2414f9-kube-api-access-72b88\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.085439 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.098062 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.108793 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-844bcbddd8-v9pcc"] Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.110338 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.125302 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.160562 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.169232 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.192708 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea579f19-b21d-4098-8f52-517be45768fb-scripts\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.192788 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-horizon-secret-key\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.192823 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea579f19-b21d-4098-8f52-517be45768fb-config-data\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.192873 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-horizon-tls-certs\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.192900 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-combined-ca-bundle\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.192973 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea579f19-b21d-4098-8f52-517be45768fb-logs\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.193002 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz2vc\" (UniqueName: \"kubernetes.io/projected/ea579f19-b21d-4098-8f52-517be45768fb-kube-api-access-bz2vc\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.204766 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-844bcbddd8-v9pcc"] Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.223809 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68df949b55-t6lcn"] Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.249652 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 
19:02:23.294619 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-horizon-secret-key\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.294693 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea579f19-b21d-4098-8f52-517be45768fb-config-data\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.294740 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-horizon-tls-certs\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.294766 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-combined-ca-bundle\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.294826 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea579f19-b21d-4098-8f52-517be45768fb-logs\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.294856 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz2vc\" (UniqueName: \"kubernetes.io/projected/ea579f19-b21d-4098-8f52-517be45768fb-kube-api-access-bz2vc\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.294878 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea579f19-b21d-4098-8f52-517be45768fb-scripts\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.295610 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea579f19-b21d-4098-8f52-517be45768fb-scripts\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.297933 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-658f6cd46d-59d52"] Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.299421 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.303936 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-horizon-secret-key\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.305277 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-combined-ca-bundle\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.310419 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea579f19-b21d-4098-8f52-517be45768fb-logs\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.310995 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea579f19-b21d-4098-8f52-517be45768fb-config-data\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.316715 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-658f6cd46d-59d52"] Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.338415 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-horizon-tls-certs\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.362840 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz2vc\" (UniqueName: \"kubernetes.io/projected/ea579f19-b21d-4098-8f52-517be45768fb-kube-api-access-bz2vc\") pod \"horizon-844bcbddd8-v9pcc\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.376596 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"c8c04a60-5bb8-4d54-93a6-1acfcbea3358","Type":"ContainerStarted","Data":"309b5f33e00a14109926f04b32c4df66ccebc7bd26407fdab521186474ca18fd"} Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.416807 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3066acf4-e48e-410e-8623-f29b5424f4fe","Type":"ContainerStarted","Data":"7b64078830307d73bd9d55655a5445b547d4172ed65555e26095cfe989f691e5"} Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.416845 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3066acf4-e48e-410e-8623-f29b5424f4fe","Type":"ContainerStarted","Data":"b2d4ca57da985b311db576bb903242b1e90db63f4632249ea9c13ed5cc552d7e"} Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.463173 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" 
podStartSLOduration=3.389207808 podStartE2EDuration="4.463157491s" podCreationTimestamp="2026-01-20 19:02:19 +0000 UTC" firstStartedPulling="2026-01-20 19:02:20.638205944 +0000 UTC m=+3396.968995606" lastFinishedPulling="2026-01-20 19:02:21.712155627 +0000 UTC m=+3398.042945289" observedRunningTime="2026-01-20 19:02:23.436914863 +0000 UTC m=+3399.767704525" watchObservedRunningTime="2026-01-20 19:02:23.463157491 +0000 UTC m=+3399.793947153" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.484151 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.4472958289999998 podStartE2EDuration="4.48413268s" podCreationTimestamp="2026-01-20 19:02:19 +0000 UTC" firstStartedPulling="2026-01-20 19:02:20.983914837 +0000 UTC m=+3397.314704499" lastFinishedPulling="2026-01-20 19:02:22.020751688 +0000 UTC m=+3398.351541350" observedRunningTime="2026-01-20 19:02:23.476241583 +0000 UTC m=+3399.807031245" watchObservedRunningTime="2026-01-20 19:02:23.48413268 +0000 UTC m=+3399.814922342" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.484853 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.497516 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c2196ee-0a5d-49b8-9f9b-4eada2792101-scripts\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.498205 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqfz8\" (UniqueName: \"kubernetes.io/projected/3c2196ee-0a5d-49b8-9f9b-4eada2792101-kube-api-access-gqfz8\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.498396 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3c2196ee-0a5d-49b8-9f9b-4eada2792101-horizon-secret-key\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.498542 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c2196ee-0a5d-49b8-9f9b-4eada2792101-config-data\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.498847 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c2196ee-0a5d-49b8-9f9b-4eada2792101-logs\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.498971 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2196ee-0a5d-49b8-9f9b-4eada2792101-combined-ca-bundle\") pod \"horizon-658f6cd46d-59d52\" (UID: 
\"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.499050 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2196ee-0a5d-49b8-9f9b-4eada2792101-horizon-tls-certs\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.602000 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c2196ee-0a5d-49b8-9f9b-4eada2792101-scripts\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.602415 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqfz8\" (UniqueName: \"kubernetes.io/projected/3c2196ee-0a5d-49b8-9f9b-4eada2792101-kube-api-access-gqfz8\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.602942 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3c2196ee-0a5d-49b8-9f9b-4eada2792101-horizon-secret-key\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.603031 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c2196ee-0a5d-49b8-9f9b-4eada2792101-scripts\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.603161 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c2196ee-0a5d-49b8-9f9b-4eada2792101-config-data\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.603206 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c2196ee-0a5d-49b8-9f9b-4eada2792101-logs\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.603337 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2196ee-0a5d-49b8-9f9b-4eada2792101-combined-ca-bundle\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.603360 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2196ee-0a5d-49b8-9f9b-4eada2792101-horizon-tls-certs\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 
19:02:23.604815 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c2196ee-0a5d-49b8-9f9b-4eada2792101-config-data\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.605048 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c2196ee-0a5d-49b8-9f9b-4eada2792101-logs\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.616619 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2196ee-0a5d-49b8-9f9b-4eada2792101-combined-ca-bundle\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.618183 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3c2196ee-0a5d-49b8-9f9b-4eada2792101-horizon-secret-key\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.623321 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c2196ee-0a5d-49b8-9f9b-4eada2792101-horizon-tls-certs\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.631004 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqfz8\" (UniqueName: \"kubernetes.io/projected/3c2196ee-0a5d-49b8-9f9b-4eada2792101-kube-api-access-gqfz8\") pod \"horizon-658f6cd46d-59d52\" (UID: \"3c2196ee-0a5d-49b8-9f9b-4eada2792101\") " pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:23 crc kubenswrapper[4661]: I0120 19:02:23.657096 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.188431 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43ad721f-147e-4d5f-bedf-7f75d256e791" path="/var/lib/kubelet/pods/43ad721f-147e-4d5f-bedf-7f75d256e791/volumes" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.189370 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca62c0a5-b15f-41e9-8eb2-507a277856a3" path="/var/lib/kubelet/pods/ca62c0a5-b15f-41e9-8eb2-507a277856a3/volumes" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.325919 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-24n4j" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.344938 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-316e-account-create-update-bp2gj" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.430699 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-316e-account-create-update-bp2gj" event={"ID":"7fa209ed-495f-49bb-b9cc-01ad4e1032a3","Type":"ContainerDied","Data":"8dab6f78b9dd029f62302a3f819fb03668ccdcc79d752d762657392444626403"} Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.430961 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dab6f78b9dd029f62302a3f819fb03668ccdcc79d752d762657392444626403" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.431020 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-316e-account-create-update-bp2gj" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.441689 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-24n4j" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.441997 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-24n4j" event={"ID":"f250573d-7ee6-4f08-92f7-b8997189e124","Type":"ContainerDied","Data":"1128542f234c6f5c3e31f0e95e9e08d2071b23a68b1316987992d3a4696044b3"} Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.442015 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1128542f234c6f5c3e31f0e95e9e08d2071b23a68b1316987992d3a4696044b3" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.448728 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cjg7\" (UniqueName: \"kubernetes.io/projected/7fa209ed-495f-49bb-b9cc-01ad4e1032a3-kube-api-access-4cjg7\") pod \"7fa209ed-495f-49bb-b9cc-01ad4e1032a3\" (UID: \"7fa209ed-495f-49bb-b9cc-01ad4e1032a3\") " Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.448889 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsdpq\" (UniqueName: \"kubernetes.io/projected/f250573d-7ee6-4f08-92f7-b8997189e124-kube-api-access-lsdpq\") pod \"f250573d-7ee6-4f08-92f7-b8997189e124\" (UID: \"f250573d-7ee6-4f08-92f7-b8997189e124\") " Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.448924 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa209ed-495f-49bb-b9cc-01ad4e1032a3-operator-scripts\") pod \"7fa209ed-495f-49bb-b9cc-01ad4e1032a3\" (UID: \"7fa209ed-495f-49bb-b9cc-01ad4e1032a3\") " Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.448994 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f250573d-7ee6-4f08-92f7-b8997189e124-operator-scripts\") pod \"f250573d-7ee6-4f08-92f7-b8997189e124\" (UID: \"f250573d-7ee6-4f08-92f7-b8997189e124\") " Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.449983 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f250573d-7ee6-4f08-92f7-b8997189e124-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f250573d-7ee6-4f08-92f7-b8997189e124" (UID: "f250573d-7ee6-4f08-92f7-b8997189e124"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.455728 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa209ed-495f-49bb-b9cc-01ad4e1032a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fa209ed-495f-49bb-b9cc-01ad4e1032a3" (UID: "7fa209ed-495f-49bb-b9cc-01ad4e1032a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.455832 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa209ed-495f-49bb-b9cc-01ad4e1032a3-kube-api-access-4cjg7" (OuterVolumeSpecName: "kube-api-access-4cjg7") pod "7fa209ed-495f-49bb-b9cc-01ad4e1032a3" (UID: "7fa209ed-495f-49bb-b9cc-01ad4e1032a3"). InnerVolumeSpecName "kube-api-access-4cjg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.458766 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f250573d-7ee6-4f08-92f7-b8997189e124-kube-api-access-lsdpq" (OuterVolumeSpecName: "kube-api-access-lsdpq") pod "f250573d-7ee6-4f08-92f7-b8997189e124" (UID: "f250573d-7ee6-4f08-92f7-b8997189e124"). InnerVolumeSpecName "kube-api-access-lsdpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.551896 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f250573d-7ee6-4f08-92f7-b8997189e124-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.551932 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cjg7\" (UniqueName: \"kubernetes.io/projected/7fa209ed-495f-49bb-b9cc-01ad4e1032a3-kube-api-access-4cjg7\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.551943 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsdpq\" (UniqueName: \"kubernetes.io/projected/f250573d-7ee6-4f08-92f7-b8997189e124-kube-api-access-lsdpq\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.551953 4661 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa209ed-495f-49bb-b9cc-01ad4e1032a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.704752 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-844bcbddd8-v9pcc"] Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.792970 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.827624 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-658f6cd46d-59d52"] Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.828975 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:24 crc kubenswrapper[4661]: W0120 19:02:24.871705 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c2196ee_0a5d_49b8_9f9b_4eada2792101.slice/crio-b1f522ca0acdfc45584f16ab5c565942f4cd4e68e0069e9390117122f690c7c7 WatchSource:0}: Error finding container 
b1f522ca0acdfc45584f16ab5c565942f4cd4e68e0069e9390117122f690c7c7: Status 404 returned error can't find the container with id b1f522ca0acdfc45584f16ab5c565942f4cd4e68e0069e9390117122f690c7c7 Jan 20 19:02:24 crc kubenswrapper[4661]: I0120 19:02:24.935783 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 20 19:02:25 crc kubenswrapper[4661]: I0120 19:02:25.456656 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-658f6cd46d-59d52" event={"ID":"3c2196ee-0a5d-49b8-9f9b-4eada2792101","Type":"ContainerStarted","Data":"b1f522ca0acdfc45584f16ab5c565942f4cd4e68e0069e9390117122f690c7c7"} Jan 20 19:02:25 crc kubenswrapper[4661]: I0120 19:02:25.464282 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4653b95-1c26-4d87-bf2d-b3daea2414f9","Type":"ContainerStarted","Data":"9560c374aaca06dcb5566f33da675a29d24ed1b6cbc63387bc82a2fd2cc07bb8"} Jan 20 19:02:25 crc kubenswrapper[4661]: I0120 19:02:25.465937 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-844bcbddd8-v9pcc" event={"ID":"ea579f19-b21d-4098-8f52-517be45768fb","Type":"ContainerStarted","Data":"3fcc2297ae30cc28d680d0e8e286bd5cc6b552e3381dd6df9d6b9cabb32b985e"} Jan 20 19:02:25 crc kubenswrapper[4661]: I0120 19:02:25.678201 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:26 crc kubenswrapper[4661]: I0120 19:02:26.480968 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4653b95-1c26-4d87-bf2d-b3daea2414f9","Type":"ContainerStarted","Data":"c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86"} Jan 20 19:02:26 crc kubenswrapper[4661]: I0120 19:02:26.483714 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7030a848-b903-4001-a699-5683451486b8","Type":"ContainerStarted","Data":"d3ce55265d298f35ba776d46e85f15926d3bdcc61234245e1c576cbdb31dbc8e"} Jan 20 19:02:27 crc kubenswrapper[4661]: I0120 19:02:27.500501 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4653b95-1c26-4d87-bf2d-b3daea2414f9","Type":"ContainerStarted","Data":"f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e"} Jan 20 19:02:27 crc kubenswrapper[4661]: I0120 19:02:27.500871 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c4653b95-1c26-4d87-bf2d-b3daea2414f9" containerName="glance-log" containerID="cri-o://c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86" gracePeriod=30 Jan 20 19:02:27 crc kubenswrapper[4661]: I0120 19:02:27.501396 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c4653b95-1c26-4d87-bf2d-b3daea2414f9" containerName="glance-httpd" containerID="cri-o://f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e" gracePeriod=30 Jan 20 19:02:27 crc kubenswrapper[4661]: I0120 19:02:27.504641 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7030a848-b903-4001-a699-5683451486b8","Type":"ContainerStarted","Data":"ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2"} Jan 20 19:02:27 crc kubenswrapper[4661]: I0120 19:02:27.504699 4661 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7030a848-b903-4001-a699-5683451486b8","Type":"ContainerStarted","Data":"d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a"} Jan 20 19:02:27 crc kubenswrapper[4661]: I0120 19:02:27.504809 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7030a848-b903-4001-a699-5683451486b8" containerName="glance-log" containerID="cri-o://d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a" gracePeriod=30 Jan 20 19:02:27 crc kubenswrapper[4661]: I0120 19:02:27.504897 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7030a848-b903-4001-a699-5683451486b8" containerName="glance-httpd" containerID="cri-o://ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2" gracePeriod=30 Jan 20 19:02:27 crc kubenswrapper[4661]: I0120 19:02:27.527910 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.527895054 podStartE2EDuration="5.527895054s" podCreationTimestamp="2026-01-20 19:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 19:02:27.527488263 +0000 UTC m=+3403.858277925" watchObservedRunningTime="2026-01-20 19:02:27.527895054 +0000 UTC m=+3403.858684716" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.333093 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.476816 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.480796 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hnfj\" (UniqueName: \"kubernetes.io/projected/7030a848-b903-4001-a699-5683451486b8-kube-api-access-7hnfj\") pod \"7030a848-b903-4001-a699-5683451486b8\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.480856 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7030a848-b903-4001-a699-5683451486b8-logs\") pod \"7030a848-b903-4001-a699-5683451486b8\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.480882 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7030a848-b903-4001-a699-5683451486b8-httpd-run\") pod \"7030a848-b903-4001-a699-5683451486b8\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.480927 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7030a848-b903-4001-a699-5683451486b8-ceph\") pod \"7030a848-b903-4001-a699-5683451486b8\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.480988 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-public-tls-certs\") pod \"7030a848-b903-4001-a699-5683451486b8\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.481005 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-combined-ca-bundle\") pod \"7030a848-b903-4001-a699-5683451486b8\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.481090 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"7030a848-b903-4001-a699-5683451486b8\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.481118 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-config-data\") pod \"7030a848-b903-4001-a699-5683451486b8\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.481139 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-scripts\") pod \"7030a848-b903-4001-a699-5683451486b8\" (UID: \"7030a848-b903-4001-a699-5683451486b8\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.481779 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7030a848-b903-4001-a699-5683451486b8-logs" (OuterVolumeSpecName: "logs") pod "7030a848-b903-4001-a699-5683451486b8" (UID: "7030a848-b903-4001-a699-5683451486b8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.488732 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7030a848-b903-4001-a699-5683451486b8-ceph" (OuterVolumeSpecName: "ceph") pod "7030a848-b903-4001-a699-5683451486b8" (UID: "7030a848-b903-4001-a699-5683451486b8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.491544 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7030a848-b903-4001-a699-5683451486b8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7030a848-b903-4001-a699-5683451486b8" (UID: "7030a848-b903-4001-a699-5683451486b8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.492177 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7030a848-b903-4001-a699-5683451486b8-kube-api-access-7hnfj" (OuterVolumeSpecName: "kube-api-access-7hnfj") pod "7030a848-b903-4001-a699-5683451486b8" (UID: "7030a848-b903-4001-a699-5683451486b8"). InnerVolumeSpecName "kube-api-access-7hnfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.494958 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "7030a848-b903-4001-a699-5683451486b8" (UID: "7030a848-b903-4001-a699-5683451486b8"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.501305 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-scripts" (OuterVolumeSpecName: "scripts") pod "7030a848-b903-4001-a699-5683451486b8" (UID: "7030a848-b903-4001-a699-5683451486b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.522847 4661 generic.go:334] "Generic (PLEG): container finished" podID="c4653b95-1c26-4d87-bf2d-b3daea2414f9" containerID="f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e" exitCode=0 Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.522879 4661 generic.go:334] "Generic (PLEG): container finished" podID="c4653b95-1c26-4d87-bf2d-b3daea2414f9" containerID="c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86" exitCode=143 Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.522897 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.522934 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4653b95-1c26-4d87-bf2d-b3daea2414f9","Type":"ContainerDied","Data":"f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e"} Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.522961 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4653b95-1c26-4d87-bf2d-b3daea2414f9","Type":"ContainerDied","Data":"c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86"} Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.522971 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4653b95-1c26-4d87-bf2d-b3daea2414f9","Type":"ContainerDied","Data":"9560c374aaca06dcb5566f33da675a29d24ed1b6cbc63387bc82a2fd2cc07bb8"} Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.522985 4661 scope.go:117] "RemoveContainer" containerID="f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.528144 4661 generic.go:334] "Generic (PLEG): container finished" podID="7030a848-b903-4001-a699-5683451486b8" containerID="ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2" exitCode=143 Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.528207 4661 generic.go:334] "Generic (PLEG): container finished" podID="7030a848-b903-4001-a699-5683451486b8" containerID="d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a" exitCode=143 Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.528230 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7030a848-b903-4001-a699-5683451486b8","Type":"ContainerDied","Data":"ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2"} Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.528258 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7030a848-b903-4001-a699-5683451486b8","Type":"ContainerDied","Data":"d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a"} Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.528273 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7030a848-b903-4001-a699-5683451486b8","Type":"ContainerDied","Data":"d3ce55265d298f35ba776d46e85f15926d3bdcc61234245e1c576cbdb31dbc8e"} Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.528347 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.552393 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7030a848-b903-4001-a699-5683451486b8" (UID: "7030a848-b903-4001-a699-5683451486b8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.564827 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-config-data" (OuterVolumeSpecName: "config-data") pod "7030a848-b903-4001-a699-5683451486b8" (UID: "7030a848-b903-4001-a699-5683451486b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.575307 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7030a848-b903-4001-a699-5683451486b8" (UID: "7030a848-b903-4001-a699-5683451486b8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.583069 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-combined-ca-bundle\") pod \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.583119 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4653b95-1c26-4d87-bf2d-b3daea2414f9-logs\") pod \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.583151 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-scripts\") pod \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.583295 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-config-data\") pod \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.583367 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.583405 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72b88\" (UniqueName: \"kubernetes.io/projected/c4653b95-1c26-4d87-bf2d-b3daea2414f9-kube-api-access-72b88\") pod \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.583446 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4653b95-1c26-4d87-bf2d-b3daea2414f9-ceph\") pod \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.583556 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c4653b95-1c26-4d87-bf2d-b3daea2414f9-httpd-run\") pod \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.583583 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-internal-tls-certs\") pod \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\" (UID: \"c4653b95-1c26-4d87-bf2d-b3daea2414f9\") " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.583971 4661 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.583984 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.584004 4661 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.584013 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.584024 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7030a848-b903-4001-a699-5683451486b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.584032 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hnfj\" (UniqueName: \"kubernetes.io/projected/7030a848-b903-4001-a699-5683451486b8-kube-api-access-7hnfj\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.584042 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7030a848-b903-4001-a699-5683451486b8-logs\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.584049 4661 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7030a848-b903-4001-a699-5683451486b8-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.584057 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7030a848-b903-4001-a699-5683451486b8-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.585111 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4653b95-1c26-4d87-bf2d-b3daea2414f9-logs" (OuterVolumeSpecName: "logs") pod "c4653b95-1c26-4d87-bf2d-b3daea2414f9" (UID: "c4653b95-1c26-4d87-bf2d-b3daea2414f9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.585519 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4653b95-1c26-4d87-bf2d-b3daea2414f9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c4653b95-1c26-4d87-bf2d-b3daea2414f9" (UID: "c4653b95-1c26-4d87-bf2d-b3daea2414f9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.586325 4661 scope.go:117] "RemoveContainer" containerID="c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.587584 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "c4653b95-1c26-4d87-bf2d-b3daea2414f9" (UID: "c4653b95-1c26-4d87-bf2d-b3daea2414f9"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.591451 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4653b95-1c26-4d87-bf2d-b3daea2414f9-kube-api-access-72b88" (OuterVolumeSpecName: "kube-api-access-72b88") pod "c4653b95-1c26-4d87-bf2d-b3daea2414f9" (UID: "c4653b95-1c26-4d87-bf2d-b3daea2414f9"). InnerVolumeSpecName "kube-api-access-72b88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.594518 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4653b95-1c26-4d87-bf2d-b3daea2414f9-ceph" (OuterVolumeSpecName: "ceph") pod "c4653b95-1c26-4d87-bf2d-b3daea2414f9" (UID: "c4653b95-1c26-4d87-bf2d-b3daea2414f9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.606366 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-scripts" (OuterVolumeSpecName: "scripts") pod "c4653b95-1c26-4d87-bf2d-b3daea2414f9" (UID: "c4653b95-1c26-4d87-bf2d-b3daea2414f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.610202 4661 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.619281 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4653b95-1c26-4d87-bf2d-b3daea2414f9" (UID: "c4653b95-1c26-4d87-bf2d-b3daea2414f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.638526 4661 scope.go:117] "RemoveContainer" containerID="f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e" Jan 20 19:02:28 crc kubenswrapper[4661]: E0120 19:02:28.640952 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e\": container with ID starting with f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e not found: ID does not exist" containerID="f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.640990 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e"} err="failed to get container status \"f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e\": rpc error: code = NotFound desc = could not find container \"f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e\": container with ID starting with f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e not found: ID does not exist" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.641012 4661 scope.go:117] "RemoveContainer" containerID="c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86" Jan 20 19:02:28 crc kubenswrapper[4661]: E0120 19:02:28.641215 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86\": container with ID starting with c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86 not found: ID does not exist" containerID="c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.641247 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86"} err="failed to get container status \"c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86\": rpc error: code = NotFound desc = could not find container \"c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86\": container with ID starting with c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86 not found: ID does not exist" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.641267 4661 scope.go:117] "RemoveContainer" containerID="f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.641498 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e"} err="failed to get container status \"f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e\": rpc error: code = NotFound desc = could not find container \"f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e\": container with ID starting with f30907dec6a3282e90b67685c34f934dbe642525d4fe7d1bd9da757bde0e074e not found: ID does not exist" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.641553 4661 scope.go:117] "RemoveContainer" containerID="c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.641994 4661 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86"} err="failed to get container status \"c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86\": rpc error: code = NotFound desc = could not find container \"c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86\": container with ID starting with c70074b7b1df285b024e95343e8430aeae103e988dd01fae7272d7b945099d86 not found: ID does not exist" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.642014 4661 scope.go:117] "RemoveContainer" containerID="ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.645234 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-config-data" (OuterVolumeSpecName: "config-data") pod "c4653b95-1c26-4d87-bf2d-b3daea2414f9" (UID: "c4653b95-1c26-4d87-bf2d-b3daea2414f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.661363 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c4653b95-1c26-4d87-bf2d-b3daea2414f9" (UID: "c4653b95-1c26-4d87-bf2d-b3daea2414f9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.688288 4661 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.688320 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72b88\" (UniqueName: \"kubernetes.io/projected/c4653b95-1c26-4d87-bf2d-b3daea2414f9-kube-api-access-72b88\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.688331 4661 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.688340 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4653b95-1c26-4d87-bf2d-b3daea2414f9-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.688348 4661 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4653b95-1c26-4d87-bf2d-b3daea2414f9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.688357 4661 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.688365 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.688375 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c4653b95-1c26-4d87-bf2d-b3daea2414f9-logs\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.688384 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.688392 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4653b95-1c26-4d87-bf2d-b3daea2414f9-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.709715 4661 scope.go:117] "RemoveContainer" containerID="d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.715478 4661 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.747890 4661 scope.go:117] "RemoveContainer" containerID="ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2" Jan 20 19:02:28 crc kubenswrapper[4661]: E0120 19:02:28.748412 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2\": container with ID starting with ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2 not found: ID does not exist" containerID="ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.748467 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2"} err="failed to get container status \"ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2\": rpc error: code = NotFound desc = could not find container \"ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2\": container with ID starting with ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2 not found: ID does not exist" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.748497 4661 scope.go:117] "RemoveContainer" containerID="d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a" Jan 20 19:02:28 crc kubenswrapper[4661]: E0120 19:02:28.750055 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a\": container with ID starting with d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a not found: ID does not exist" containerID="d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.750082 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a"} err="failed to get container status \"d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a\": rpc error: code = NotFound desc = could not find container \"d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a\": container with ID starting with d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a not found: ID does not exist" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 
19:02:28.750097 4661 scope.go:117] "RemoveContainer" containerID="ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.750567 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2"} err="failed to get container status \"ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2\": rpc error: code = NotFound desc = could not find container \"ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2\": container with ID starting with ab2f6ab66a8f111fe693bcf0d3f3a4671e2aa95ac47c91506e2de48b7e8e53d2 not found: ID does not exist" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.750588 4661 scope.go:117] "RemoveContainer" containerID="d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.750803 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a"} err="failed to get container status \"d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a\": rpc error: code = NotFound desc = could not find container \"d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a\": container with ID starting with d64d3a69c1583f2b980d5cc05f30e926c6ecf55db038c0ecc28ce668fbf4696a not found: ID does not exist" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.790858 4661 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.924576 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.948696 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.961434 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.975738 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.987139 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:28 crc kubenswrapper[4661]: E0120 19:02:28.987500 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7030a848-b903-4001-a699-5683451486b8" containerName="glance-httpd" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.987512 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="7030a848-b903-4001-a699-5683451486b8" containerName="glance-httpd" Jan 20 19:02:28 crc kubenswrapper[4661]: E0120 19:02:28.987525 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4653b95-1c26-4d87-bf2d-b3daea2414f9" containerName="glance-httpd" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.987533 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4653b95-1c26-4d87-bf2d-b3daea2414f9" containerName="glance-httpd" Jan 20 19:02:28 crc kubenswrapper[4661]: E0120 19:02:28.987545 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f250573d-7ee6-4f08-92f7-b8997189e124" 
containerName="mariadb-database-create" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.987551 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f250573d-7ee6-4f08-92f7-b8997189e124" containerName="mariadb-database-create" Jan 20 19:02:28 crc kubenswrapper[4661]: E0120 19:02:28.987563 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4653b95-1c26-4d87-bf2d-b3daea2414f9" containerName="glance-log" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.987568 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4653b95-1c26-4d87-bf2d-b3daea2414f9" containerName="glance-log" Jan 20 19:02:28 crc kubenswrapper[4661]: E0120 19:02:28.987577 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa209ed-495f-49bb-b9cc-01ad4e1032a3" containerName="mariadb-account-create-update" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.987582 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa209ed-495f-49bb-b9cc-01ad4e1032a3" containerName="mariadb-account-create-update" Jan 20 19:02:28 crc kubenswrapper[4661]: E0120 19:02:28.987590 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7030a848-b903-4001-a699-5683451486b8" containerName="glance-log" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.987597 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="7030a848-b903-4001-a699-5683451486b8" containerName="glance-log" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.987803 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4653b95-1c26-4d87-bf2d-b3daea2414f9" containerName="glance-httpd" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.987822 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4653b95-1c26-4d87-bf2d-b3daea2414f9" containerName="glance-log" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.987833 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa209ed-495f-49bb-b9cc-01ad4e1032a3" containerName="mariadb-account-create-update" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.987843 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="7030a848-b903-4001-a699-5683451486b8" containerName="glance-httpd" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.987854 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="7030a848-b903-4001-a699-5683451486b8" containerName="glance-log" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.987861 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f250573d-7ee6-4f08-92f7-b8997189e124" containerName="mariadb-database-create" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.988767 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.991479 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q5zvh" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.991827 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.991939 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 20 19:02:28 crc kubenswrapper[4661]: I0120 19:02:28.991990 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.005430 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.017490 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.022838 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.025280 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.027984 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.028224 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.096341 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26659b2b-07b6-4184-b2a8-bad999a10fd3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.096761 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26659b2b-07b6-4184-b2a8-bad999a10fd3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.096824 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26659b2b-07b6-4184-b2a8-bad999a10fd3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.096990 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4xzb\" (UniqueName: \"kubernetes.io/projected/26659b2b-07b6-4184-b2a8-bad999a10fd3-kube-api-access-t4xzb\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097054 4661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26659b2b-07b6-4184-b2a8-bad999a10fd3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097083 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ff298cae-f405-48cf-a8b3-c297f1e6cf80-ceph\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097228 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff298cae-f405-48cf-a8b3-c297f1e6cf80-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097286 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26659b2b-07b6-4184-b2a8-bad999a10fd3-logs\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097333 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff298cae-f405-48cf-a8b3-c297f1e6cf80-logs\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097347 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff298cae-f405-48cf-a8b3-c297f1e6cf80-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097422 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/26659b2b-07b6-4184-b2a8-bad999a10fd3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097449 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff298cae-f405-48cf-a8b3-c297f1e6cf80-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097472 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097508 4661 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff298cae-f405-48cf-a8b3-c297f1e6cf80-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097556 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26659b2b-07b6-4184-b2a8-bad999a10fd3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097614 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff298cae-f405-48cf-a8b3-c297f1e6cf80-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097700 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.097723 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9k46\" (UniqueName: \"kubernetes.io/projected/ff298cae-f405-48cf-a8b3-c297f1e6cf80-kube-api-access-v9k46\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.199650 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26659b2b-07b6-4184-b2a8-bad999a10fd3-logs\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.199727 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff298cae-f405-48cf-a8b3-c297f1e6cf80-logs\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.199748 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff298cae-f405-48cf-a8b3-c297f1e6cf80-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.199784 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/26659b2b-07b6-4184-b2a8-bad999a10fd3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.199803 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff298cae-f405-48cf-a8b3-c297f1e6cf80-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.199822 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.199861 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff298cae-f405-48cf-a8b3-c297f1e6cf80-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.199897 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26659b2b-07b6-4184-b2a8-bad999a10fd3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.199924 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff298cae-f405-48cf-a8b3-c297f1e6cf80-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.199947 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.199966 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9k46\" (UniqueName: \"kubernetes.io/projected/ff298cae-f405-48cf-a8b3-c297f1e6cf80-kube-api-access-v9k46\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.200002 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26659b2b-07b6-4184-b2a8-bad999a10fd3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.200683 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff298cae-f405-48cf-a8b3-c297f1e6cf80-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.200872 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.201416 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26659b2b-07b6-4184-b2a8-bad999a10fd3-logs\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.201455 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26659b2b-07b6-4184-b2a8-bad999a10fd3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.201688 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26659b2b-07b6-4184-b2a8-bad999a10fd3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.201701 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26659b2b-07b6-4184-b2a8-bad999a10fd3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.201809 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4xzb\" (UniqueName: \"kubernetes.io/projected/26659b2b-07b6-4184-b2a8-bad999a10fd3-kube-api-access-t4xzb\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.201865 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26659b2b-07b6-4184-b2a8-bad999a10fd3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.201886 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ff298cae-f405-48cf-a8b3-c297f1e6cf80-ceph\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.201951 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff298cae-f405-48cf-a8b3-c297f1e6cf80-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.204963 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff298cae-f405-48cf-a8b3-c297f1e6cf80-logs\") pod \"glance-default-external-api-0\" (UID: 
\"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.205839 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.210787 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff298cae-f405-48cf-a8b3-c297f1e6cf80-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.210918 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ff298cae-f405-48cf-a8b3-c297f1e6cf80-ceph\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.211788 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/26659b2b-07b6-4184-b2a8-bad999a10fd3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.225097 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff298cae-f405-48cf-a8b3-c297f1e6cf80-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.229932 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26659b2b-07b6-4184-b2a8-bad999a10fd3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.231820 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26659b2b-07b6-4184-b2a8-bad999a10fd3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.232230 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff298cae-f405-48cf-a8b3-c297f1e6cf80-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.234494 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26659b2b-07b6-4184-b2a8-bad999a10fd3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.235662 4661 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9k46\" (UniqueName: \"kubernetes.io/projected/ff298cae-f405-48cf-a8b3-c297f1e6cf80-kube-api-access-v9k46\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.236370 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26659b2b-07b6-4184-b2a8-bad999a10fd3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.238545 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4xzb\" (UniqueName: \"kubernetes.io/projected/26659b2b-07b6-4184-b2a8-bad999a10fd3-kube-api-access-t4xzb\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.256288 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff298cae-f405-48cf-a8b3-c297f1e6cf80-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.284853 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ff298cae-f405-48cf-a8b3-c297f1e6cf80\") " pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.300960 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"26659b2b-07b6-4184-b2a8-bad999a10fd3\") " pod="openstack/glance-default-internal-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.312306 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.323840 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.323891 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:02:29 crc kubenswrapper[4661]: I0120 19:02:29.367443 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:29.900981 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.063359 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.165865 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7030a848-b903-4001-a699-5683451486b8" path="/var/lib/kubelet/pods/7030a848-b903-4001-a699-5683451486b8/volumes" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.166713 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4653b95-1c26-4d87-bf2d-b3daea2414f9" path="/var/lib/kubelet/pods/c4653b95-1c26-4d87-bf2d-b3daea2414f9/volumes" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.259097 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.397497 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.737315 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-glqvr"] Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.738385 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.741066 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-x66qp" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.741265 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.751396 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-glqvr"] Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.856810 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-combined-ca-bundle\") pod \"manila-db-sync-glqvr\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.856916 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ph4t\" (UniqueName: \"kubernetes.io/projected/147f6908-f22c-451e-85e0-d75cce7af6a7-kube-api-access-8ph4t\") pod \"manila-db-sync-glqvr\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.856975 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-job-config-data\") pod \"manila-db-sync-glqvr\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.857055 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-config-data\") pod 
\"manila-db-sync-glqvr\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.958655 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-config-data\") pod \"manila-db-sync-glqvr\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.958753 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-combined-ca-bundle\") pod \"manila-db-sync-glqvr\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.958827 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ph4t\" (UniqueName: \"kubernetes.io/projected/147f6908-f22c-451e-85e0-d75cce7af6a7-kube-api-access-8ph4t\") pod \"manila-db-sync-glqvr\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.958874 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-job-config-data\") pod \"manila-db-sync-glqvr\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.969847 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-combined-ca-bundle\") pod \"manila-db-sync-glqvr\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.970682 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-config-data\") pod \"manila-db-sync-glqvr\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.975623 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ph4t\" (UniqueName: \"kubernetes.io/projected/147f6908-f22c-451e-85e0-d75cce7af6a7-kube-api-access-8ph4t\") pod \"manila-db-sync-glqvr\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:30 crc kubenswrapper[4661]: I0120 19:02:30.981015 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-job-config-data\") pod \"manila-db-sync-glqvr\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:31 crc kubenswrapper[4661]: I0120 19:02:31.080711 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:35 crc kubenswrapper[4661]: W0120 19:02:35.605921 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff298cae_f405_48cf_a8b3_c297f1e6cf80.slice/crio-f4a11555e1cfc551c7b9430069ddf44a43f143725ac5bab29b0e49f99920d688 WatchSource:0}: Error finding container f4a11555e1cfc551c7b9430069ddf44a43f143725ac5bab29b0e49f99920d688: Status 404 returned error can't find the container with id f4a11555e1cfc551c7b9430069ddf44a43f143725ac5bab29b0e49f99920d688 Jan 20 19:02:35 crc kubenswrapper[4661]: W0120 19:02:35.625572 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26659b2b_07b6_4184_b2a8_bad999a10fd3.slice/crio-ad3442fdb0ed4a9606a0aed40c50f6d51f5f4ba600e6935c6360396cb155f7e8 WatchSource:0}: Error finding container ad3442fdb0ed4a9606a0aed40c50f6d51f5f4ba600e6935c6360396cb155f7e8: Status 404 returned error can't find the container with id ad3442fdb0ed4a9606a0aed40c50f6d51f5f4ba600e6935c6360396cb155f7e8 Jan 20 19:02:35 crc kubenswrapper[4661]: I0120 19:02:35.686053 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26659b2b-07b6-4184-b2a8-bad999a10fd3","Type":"ContainerStarted","Data":"ad3442fdb0ed4a9606a0aed40c50f6d51f5f4ba600e6935c6360396cb155f7e8"} Jan 20 19:02:35 crc kubenswrapper[4661]: I0120 19:02:35.688067 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff298cae-f405-48cf-a8b3-c297f1e6cf80","Type":"ContainerStarted","Data":"f4a11555e1cfc551c7b9430069ddf44a43f143725ac5bab29b0e49f99920d688"} Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.315874 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-glqvr"] Jan 20 19:02:36 crc kubenswrapper[4661]: W0120 19:02:36.322007 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod147f6908_f22c_451e_85e0_d75cce7af6a7.slice/crio-50cd6a346d7a5df7857cbd71c9e3b1919c635b773508411071bc7dc40696b7eb WatchSource:0}: Error finding container 50cd6a346d7a5df7857cbd71c9e3b1919c635b773508411071bc7dc40696b7eb: Status 404 returned error can't find the container with id 50cd6a346d7a5df7857cbd71c9e3b1919c635b773508411071bc7dc40696b7eb Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.328928 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.721518 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-658f6cd46d-59d52" event={"ID":"3c2196ee-0a5d-49b8-9f9b-4eada2792101","Type":"ContainerStarted","Data":"29f8d1b4cf76d5aa860478cdb28b75fa38bdda23884b9bb436d5a275de5c0ae2"} Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.721814 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-658f6cd46d-59d52" event={"ID":"3c2196ee-0a5d-49b8-9f9b-4eada2792101","Type":"ContainerStarted","Data":"7d6d43999c83a74698dcba5b87507cff4c82bfb797d4c0633c910031d9bc712d"} Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.732966 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"ff298cae-f405-48cf-a8b3-c297f1e6cf80","Type":"ContainerStarted","Data":"88d830a7c23bd43a009ac0d02d3d01eca080e792fdca38692456e2747bc63a4d"} Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.757083 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-glqvr" event={"ID":"147f6908-f22c-451e-85e0-d75cce7af6a7","Type":"ContainerStarted","Data":"50cd6a346d7a5df7857cbd71c9e3b1919c635b773508411071bc7dc40696b7eb"} Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.779101 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68df949b55-t6lcn" event={"ID":"24a02294-d575-420a-a004-9eaac022318e","Type":"ContainerStarted","Data":"41cff747617a35fc61aab25c795cfd19ccc8c87aee5b4b4176116edc066819f0"} Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.779140 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68df949b55-t6lcn" event={"ID":"24a02294-d575-420a-a004-9eaac022318e","Type":"ContainerStarted","Data":"96931cecffd32deb0b5a65174eef09968b1a6eab634134c3542de608948b5409"} Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.779265 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68df949b55-t6lcn" podUID="24a02294-d575-420a-a004-9eaac022318e" containerName="horizon-log" containerID="cri-o://96931cecffd32deb0b5a65174eef09968b1a6eab634134c3542de608948b5409" gracePeriod=30 Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.779920 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68df949b55-t6lcn" podUID="24a02294-d575-420a-a004-9eaac022318e" containerName="horizon" containerID="cri-o://41cff747617a35fc61aab25c795cfd19ccc8c87aee5b4b4176116edc066819f0" gracePeriod=30 Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.801389 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-658f6cd46d-59d52" podStartSLOduration=2.884058677 podStartE2EDuration="13.801371157s" podCreationTimestamp="2026-01-20 19:02:23 +0000 UTC" firstStartedPulling="2026-01-20 19:02:24.891024282 +0000 UTC m=+3401.221813944" lastFinishedPulling="2026-01-20 19:02:35.808336762 +0000 UTC m=+3412.139126424" observedRunningTime="2026-01-20 19:02:36.757281773 +0000 UTC m=+3413.088071435" watchObservedRunningTime="2026-01-20 19:02:36.801371157 +0000 UTC m=+3413.132160819" Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.810385 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-844bcbddd8-v9pcc" event={"ID":"ea579f19-b21d-4098-8f52-517be45768fb","Type":"ContainerStarted","Data":"f921a9bdfcd4155d0506b5d9b39786056aba403a0899c9ec152853ac2b3655f1"} Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.810441 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-844bcbddd8-v9pcc" event={"ID":"ea579f19-b21d-4098-8f52-517be45768fb","Type":"ContainerStarted","Data":"5111dbc27e814b94b5795799cd69cac1ddd769cbf1816e3666e0f1967d88e741"} Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.817783 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68df949b55-t6lcn" podStartSLOduration=2.505322658 podStartE2EDuration="16.817766087s" podCreationTimestamp="2026-01-20 19:02:20 +0000 UTC" firstStartedPulling="2026-01-20 19:02:21.505389632 +0000 UTC m=+3397.836179294" lastFinishedPulling="2026-01-20 19:02:35.817833061 +0000 UTC m=+3412.148622723" observedRunningTime="2026-01-20 19:02:36.799718764 +0000 UTC 
m=+3413.130508436" watchObservedRunningTime="2026-01-20 19:02:36.817766087 +0000 UTC m=+3413.148555749" Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.821081 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676bf6649-b97jp" event={"ID":"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4","Type":"ContainerStarted","Data":"a5f896f95be8c2cc37c73cee92fa21cb98027960309cefa619025e9a051a0760"} Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.821144 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676bf6649-b97jp" event={"ID":"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4","Type":"ContainerStarted","Data":"27aa1539165405c3cef6d31f01c8779f30536b05d0703687c5140f97d0808c1c"} Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.821154 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-676bf6649-b97jp" podUID="c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" containerName="horizon-log" containerID="cri-o://27aa1539165405c3cef6d31f01c8779f30536b05d0703687c5140f97d0808c1c" gracePeriod=30 Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.821213 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-676bf6649-b97jp" podUID="c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" containerName="horizon" containerID="cri-o://a5f896f95be8c2cc37c73cee92fa21cb98027960309cefa619025e9a051a0760" gracePeriod=30 Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.831788 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-844bcbddd8-v9pcc" podStartSLOduration=3.660349828 podStartE2EDuration="14.831765393s" podCreationTimestamp="2026-01-20 19:02:22 +0000 UTC" firstStartedPulling="2026-01-20 19:02:24.739768901 +0000 UTC m=+3401.070558563" lastFinishedPulling="2026-01-20 19:02:35.911184466 +0000 UTC m=+3412.241974128" observedRunningTime="2026-01-20 19:02:36.829326729 +0000 UTC m=+3413.160116401" watchObservedRunningTime="2026-01-20 19:02:36.831765393 +0000 UTC m=+3413.162555055" Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.853986 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26659b2b-07b6-4184-b2a8-bad999a10fd3","Type":"ContainerStarted","Data":"9387ddabf620fe4b28fcc1e392d92f5c65ef7e7df43b5768c7cfff5685841003"} Jan 20 19:02:36 crc kubenswrapper[4661]: I0120 19:02:36.862207 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-676bf6649-b97jp" podStartSLOduration=2.833381229 podStartE2EDuration="16.86218814s" podCreationTimestamp="2026-01-20 19:02:20 +0000 UTC" firstStartedPulling="2026-01-20 19:02:21.705637386 +0000 UTC m=+3398.036427038" lastFinishedPulling="2026-01-20 19:02:35.734444277 +0000 UTC m=+3412.065233949" observedRunningTime="2026-01-20 19:02:36.859927071 +0000 UTC m=+3413.190716733" watchObservedRunningTime="2026-01-20 19:02:36.86218814 +0000 UTC m=+3413.192977792" Jan 20 19:02:37 crc kubenswrapper[4661]: I0120 19:02:37.872601 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff298cae-f405-48cf-a8b3-c297f1e6cf80","Type":"ContainerStarted","Data":"c3c3843e46e58ff627b51a55c7d2d5b1437ffa9e2b9431b7160c15a9f6a09b77"} Jan 20 19:02:37 crc kubenswrapper[4661]: I0120 19:02:37.875778 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"26659b2b-07b6-4184-b2a8-bad999a10fd3","Type":"ContainerStarted","Data":"239a86cefcdb2537038fa055ca3e670d1f956a40f9860a95e1f37bfb6461030f"} Jan 20 19:02:37 crc kubenswrapper[4661]: I0120 19:02:37.900094 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.900076649 podStartE2EDuration="9.900076649s" podCreationTimestamp="2026-01-20 19:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 19:02:37.888854055 +0000 UTC m=+3414.219643717" watchObservedRunningTime="2026-01-20 19:02:37.900076649 +0000 UTC m=+3414.230866311" Jan 20 19:02:39 crc kubenswrapper[4661]: I0120 19:02:39.313527 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 20 19:02:39 crc kubenswrapper[4661]: I0120 19:02:39.313888 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 20 19:02:39 crc kubenswrapper[4661]: I0120 19:02:39.352200 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 20 19:02:39 crc kubenswrapper[4661]: I0120 19:02:39.366939 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 20 19:02:39 crc kubenswrapper[4661]: I0120 19:02:39.368114 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:39 crc kubenswrapper[4661]: I0120 19:02:39.368154 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:39 crc kubenswrapper[4661]: I0120 19:02:39.379425 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.379405348 podStartE2EDuration="11.379405348s" podCreationTimestamp="2026-01-20 19:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 19:02:37.915826392 +0000 UTC m=+3414.246616044" watchObservedRunningTime="2026-01-20 19:02:39.379405348 +0000 UTC m=+3415.710195010" Jan 20 19:02:39 crc kubenswrapper[4661]: I0120 19:02:39.414988 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:39 crc kubenswrapper[4661]: I0120 19:02:39.444548 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:39 crc kubenswrapper[4661]: I0120 19:02:39.901197 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:39 crc kubenswrapper[4661]: I0120 19:02:39.901560 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 20 19:02:39 crc kubenswrapper[4661]: I0120 19:02:39.901577 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 20 19:02:39 crc kubenswrapper[4661]: I0120 19:02:39.901590 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 20 19:02:40 crc kubenswrapper[4661]: I0120 19:02:40.703482 4661 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:02:40 crc kubenswrapper[4661]: I0120 19:02:40.924860 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:02:43 crc kubenswrapper[4661]: I0120 19:02:43.486417 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:43 crc kubenswrapper[4661]: I0120 19:02:43.486921 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:02:43 crc kubenswrapper[4661]: I0120 19:02:43.664361 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:43 crc kubenswrapper[4661]: I0120 19:02:43.665124 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:02:43 crc kubenswrapper[4661]: I0120 19:02:43.965509 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-glqvr" event={"ID":"147f6908-f22c-451e-85e0-d75cce7af6a7","Type":"ContainerStarted","Data":"f837cd65f87bf0f896233b6c14fdf72c29a919824b0b36a248eb103b1c6a5d27"} Jan 20 19:02:43 crc kubenswrapper[4661]: I0120 19:02:43.986384 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-glqvr" podStartSLOduration=7.475387148 podStartE2EDuration="13.98636044s" podCreationTimestamp="2026-01-20 19:02:30 +0000 UTC" firstStartedPulling="2026-01-20 19:02:36.328649588 +0000 UTC m=+3412.659439250" lastFinishedPulling="2026-01-20 19:02:42.83962288 +0000 UTC m=+3419.170412542" observedRunningTime="2026-01-20 19:02:43.980490887 +0000 UTC m=+3420.311280559" watchObservedRunningTime="2026-01-20 19:02:43.98636044 +0000 UTC m=+3420.317150122" Jan 20 19:02:53 crc kubenswrapper[4661]: I0120 19:02:53.052052 4661 generic.go:334] "Generic (PLEG): container finished" podID="147f6908-f22c-451e-85e0-d75cce7af6a7" containerID="f837cd65f87bf0f896233b6c14fdf72c29a919824b0b36a248eb103b1c6a5d27" exitCode=0 Jan 20 19:02:53 crc kubenswrapper[4661]: I0120 19:02:53.053776 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-glqvr" event={"ID":"147f6908-f22c-451e-85e0-d75cce7af6a7","Type":"ContainerDied","Data":"f837cd65f87bf0f896233b6c14fdf72c29a919824b0b36a248eb103b1c6a5d27"} Jan 20 19:02:53 crc kubenswrapper[4661]: I0120 19:02:53.490094 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-844bcbddd8-v9pcc" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Jan 20 19:02:53 crc kubenswrapper[4661]: I0120 19:02:53.660969 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-658f6cd46d-59d52" podUID="3c2196ee-0a5d-49b8-9f9b-4eada2792101" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 20 19:02:54 crc kubenswrapper[4661]: I0120 19:02:54.634933 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:54 crc kubenswrapper[4661]: I0120 19:02:54.798192 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-combined-ca-bundle\") pod \"147f6908-f22c-451e-85e0-d75cce7af6a7\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " Jan 20 19:02:54 crc kubenswrapper[4661]: I0120 19:02:54.798368 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-config-data\") pod \"147f6908-f22c-451e-85e0-d75cce7af6a7\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " Jan 20 19:02:54 crc kubenswrapper[4661]: I0120 19:02:54.798401 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-job-config-data\") pod \"147f6908-f22c-451e-85e0-d75cce7af6a7\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " Jan 20 19:02:54 crc kubenswrapper[4661]: I0120 19:02:54.798516 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ph4t\" (UniqueName: \"kubernetes.io/projected/147f6908-f22c-451e-85e0-d75cce7af6a7-kube-api-access-8ph4t\") pod \"147f6908-f22c-451e-85e0-d75cce7af6a7\" (UID: \"147f6908-f22c-451e-85e0-d75cce7af6a7\") " Jan 20 19:02:54 crc kubenswrapper[4661]: I0120 19:02:54.805012 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147f6908-f22c-451e-85e0-d75cce7af6a7-kube-api-access-8ph4t" (OuterVolumeSpecName: "kube-api-access-8ph4t") pod "147f6908-f22c-451e-85e0-d75cce7af6a7" (UID: "147f6908-f22c-451e-85e0-d75cce7af6a7"). InnerVolumeSpecName "kube-api-access-8ph4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:02:54 crc kubenswrapper[4661]: I0120 19:02:54.806755 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "147f6908-f22c-451e-85e0-d75cce7af6a7" (UID: "147f6908-f22c-451e-85e0-d75cce7af6a7"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:54 crc kubenswrapper[4661]: I0120 19:02:54.808211 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-config-data" (OuterVolumeSpecName: "config-data") pod "147f6908-f22c-451e-85e0-d75cce7af6a7" (UID: "147f6908-f22c-451e-85e0-d75cce7af6a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:54 crc kubenswrapper[4661]: I0120 19:02:54.826881 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "147f6908-f22c-451e-85e0-d75cce7af6a7" (UID: "147f6908-f22c-451e-85e0-d75cce7af6a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:02:54 crc kubenswrapper[4661]: I0120 19:02:54.901388 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:54 crc kubenswrapper[4661]: I0120 19:02:54.901423 4661 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:54 crc kubenswrapper[4661]: I0120 19:02:54.901440 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ph4t\" (UniqueName: \"kubernetes.io/projected/147f6908-f22c-451e-85e0-d75cce7af6a7-kube-api-access-8ph4t\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:54 crc kubenswrapper[4661]: I0120 19:02:54.901455 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147f6908-f22c-451e-85e0-d75cce7af6a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.079342 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-glqvr" event={"ID":"147f6908-f22c-451e-85e0-d75cce7af6a7","Type":"ContainerDied","Data":"50cd6a346d7a5df7857cbd71c9e3b1919c635b773508411071bc7dc40696b7eb"} Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.079378 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50cd6a346d7a5df7857cbd71c9e3b1919c635b773508411071bc7dc40696b7eb" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.079865 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-glqvr" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.597273 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 19:02:55 crc kubenswrapper[4661]: E0120 19:02:55.597861 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147f6908-f22c-451e-85e0-d75cce7af6a7" containerName="manila-db-sync" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.597878 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="147f6908-f22c-451e-85e0-d75cce7af6a7" containerName="manila-db-sync" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.598079 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="147f6908-f22c-451e-85e0-d75cce7af6a7" containerName="manila-db-sync" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.604113 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.616352 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.620255 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.620504 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.620786 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-x66qp" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.626785 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.662083 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.663817 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.667607 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.671265 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.717250 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjmsp\" (UniqueName: \"kubernetes.io/projected/8abbf31c-d723-4945-9d0c-6268b4100937-kube-api-access-hjmsp\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.717298 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8abbf31c-d723-4945-9d0c-6268b4100937-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.717321 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-scripts\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.717344 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-config-data\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.717378 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.717444 4661 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.770788 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77766fdf55-f5frm"] Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.781251 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.798088 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77766fdf55-f5frm"] Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.820503 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-config-data\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.820572 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-config-data\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.820592 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adb303b3-0970-4438-92ee-dd30a6fb55b2-ceph\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.820618 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzmjv\" (UniqueName: \"kubernetes.io/projected/adb303b3-0970-4438-92ee-dd30a6fb55b2-kube-api-access-dzmjv\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.820644 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.820713 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.820752 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 
19:02:55.820781 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-scripts\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.820842 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjmsp\" (UniqueName: \"kubernetes.io/projected/8abbf31c-d723-4945-9d0c-6268b4100937-kube-api-access-hjmsp\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.820865 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adb303b3-0970-4438-92ee-dd30a6fb55b2-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.826120 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/adb303b3-0970-4438-92ee-dd30a6fb55b2-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.826175 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8abbf31c-d723-4945-9d0c-6268b4100937-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.826201 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-scripts\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.826607 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.848104 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8abbf31c-d723-4945-9d0c-6268b4100937-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.895716 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-scripts\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.901792 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjmsp\" (UniqueName: 
\"kubernetes.io/projected/8abbf31c-d723-4945-9d0c-6268b4100937-kube-api-access-hjmsp\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.905366 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.906112 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-config-data\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.929797 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-ovsdbserver-sb\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.929845 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adb303b3-0970-4438-92ee-dd30a6fb55b2-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.929869 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/adb303b3-0970-4438-92ee-dd30a6fb55b2-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.929895 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.929933 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-config-data\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.929953 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adb303b3-0970-4438-92ee-dd30a6fb55b2-ceph\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.929969 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-config\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " 
pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.930001 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzmjv\" (UniqueName: \"kubernetes.io/projected/adb303b3-0970-4438-92ee-dd30a6fb55b2-kube-api-access-dzmjv\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.930067 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-ovsdbserver-nb\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.930097 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.930147 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-scripts\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.930175 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6srgk\" (UniqueName: \"kubernetes.io/projected/a3cb0dc2-5231-45c2-81ae-038a006f73f0-kube-api-access-6srgk\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.930233 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-openstack-edpm-ipam\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.930255 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-dns-svc\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.930389 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adb303b3-0970-4438-92ee-dd30a6fb55b2-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.935321 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " pod="openstack/manila-scheduler-0" Jan 20 
19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.947924 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/adb303b3-0970-4438-92ee-dd30a6fb55b2-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.971896 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.975316 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-config-data\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.976445 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzmjv\" (UniqueName: \"kubernetes.io/projected/adb303b3-0970-4438-92ee-dd30a6fb55b2-kube-api-access-dzmjv\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.986169 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-scripts\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.988583 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:55 crc kubenswrapper[4661]: I0120 19:02:55.992776 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adb303b3-0970-4438-92ee-dd30a6fb55b2-ceph\") pod \"manila-share-share1-0\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " pod="openstack/manila-share-share1-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.000374 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.060820 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.062492 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.062925 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6srgk\" (UniqueName: \"kubernetes.io/projected/a3cb0dc2-5231-45c2-81ae-038a006f73f0-kube-api-access-6srgk\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.063013 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-dns-svc\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.063032 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-openstack-edpm-ipam\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.063056 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-ovsdbserver-sb\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.063093 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-config\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.063144 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-ovsdbserver-nb\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.064380 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-ovsdbserver-nb\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.069587 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.073360 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-dns-svc\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.081551 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-config\") pod 
\"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.082527 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-openstack-edpm-ipam\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.094140 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3cb0dc2-5231-45c2-81ae-038a006f73f0-ovsdbserver-sb\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.111840 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.126992 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6srgk\" (UniqueName: \"kubernetes.io/projected/a3cb0dc2-5231-45c2-81ae-038a006f73f0-kube-api-access-6srgk\") pod \"dnsmasq-dns-77766fdf55-f5frm\" (UID: \"a3cb0dc2-5231-45c2-81ae-038a006f73f0\") " pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.167723 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.167784 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-scripts\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.167805 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-config-data-custom\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.167828 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87eba907-795e-421b-a22e-8f978bb12efd-etc-machine-id\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.167887 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-config-data\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.167927 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trdhs\" (UniqueName: 
\"kubernetes.io/projected/87eba907-795e-421b-a22e-8f978bb12efd-kube-api-access-trdhs\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.167955 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87eba907-795e-421b-a22e-8f978bb12efd-logs\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.225140 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.269069 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-scripts\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.269108 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-config-data-custom\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.269133 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87eba907-795e-421b-a22e-8f978bb12efd-etc-machine-id\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.269212 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-config-data\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.269261 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trdhs\" (UniqueName: \"kubernetes.io/projected/87eba907-795e-421b-a22e-8f978bb12efd-kube-api-access-trdhs\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.269289 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87eba907-795e-421b-a22e-8f978bb12efd-logs\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.269343 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.272551 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87eba907-795e-421b-a22e-8f978bb12efd-etc-machine-id\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 
19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.273403 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87eba907-795e-421b-a22e-8f978bb12efd-logs\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.277500 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-scripts\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.277499 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-config-data\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.278383 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-config-data-custom\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.278552 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.304323 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trdhs\" (UniqueName: \"kubernetes.io/projected/87eba907-795e-421b-a22e-8f978bb12efd-kube-api-access-trdhs\") pod \"manila-api-0\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.403851 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.483868 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.908118 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 19:02:56 crc kubenswrapper[4661]: I0120 19:02:56.943795 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 19:02:57 crc kubenswrapper[4661]: I0120 19:02:57.058339 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77766fdf55-f5frm"] Jan 20 19:02:57 crc kubenswrapper[4661]: I0120 19:02:57.177978 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8abbf31c-d723-4945-9d0c-6268b4100937","Type":"ContainerStarted","Data":"92e679e67c6189e0ca9cfbbbdd4a96a8f81ca44d4f5dda995a36ed5350aa4787"} Jan 20 19:02:57 crc kubenswrapper[4661]: I0120 19:02:57.184993 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"adb303b3-0970-4438-92ee-dd30a6fb55b2","Type":"ContainerStarted","Data":"d6c9eebd02b32f6e3a5bf26f2aa0533c9c3c023243cf8c61391f8542756f38d9"} Jan 20 19:02:57 crc kubenswrapper[4661]: I0120 19:02:57.189783 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77766fdf55-f5frm" event={"ID":"a3cb0dc2-5231-45c2-81ae-038a006f73f0","Type":"ContainerStarted","Data":"8098dccf0c7a4ad86e667b1cec533167266762211e280bfdcc9cd48952442305"} Jan 20 19:02:57 crc kubenswrapper[4661]: I0120 19:02:57.263414 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 20 19:02:58 crc kubenswrapper[4661]: I0120 19:02:58.202025 4661 generic.go:334] "Generic (PLEG): container finished" podID="a3cb0dc2-5231-45c2-81ae-038a006f73f0" containerID="11b89324685fe05bc8b571e9d99dd32bd4c30957fbe0b3b21a9d4542323190dd" exitCode=0 Jan 20 19:02:58 crc kubenswrapper[4661]: I0120 19:02:58.202184 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77766fdf55-f5frm" event={"ID":"a3cb0dc2-5231-45c2-81ae-038a006f73f0","Type":"ContainerDied","Data":"11b89324685fe05bc8b571e9d99dd32bd4c30957fbe0b3b21a9d4542323190dd"} Jan 20 19:02:58 crc kubenswrapper[4661]: I0120 19:02:58.210266 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"87eba907-795e-421b-a22e-8f978bb12efd","Type":"ContainerStarted","Data":"bb74839c0269ab4f68323750038cf1080b18213c9f8a164a748016dfeecc3629"} Jan 20 19:02:58 crc kubenswrapper[4661]: I0120 19:02:58.210304 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"87eba907-795e-421b-a22e-8f978bb12efd","Type":"ContainerStarted","Data":"d4d53d8ae96d63783774347b6e263bac4bd7106bc60c1708ed507e981296b5b8"} Jan 20 19:02:58 crc kubenswrapper[4661]: I0120 19:02:58.879691 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 20 19:02:59 crc kubenswrapper[4661]: I0120 19:02:59.231691 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77766fdf55-f5frm" event={"ID":"a3cb0dc2-5231-45c2-81ae-038a006f73f0","Type":"ContainerStarted","Data":"7cf1d426918418e7d1523354d92a5a3a0bdbc25ec22b0eefb2e06973d9a53d2c"} Jan 20 19:02:59 crc kubenswrapper[4661]: I0120 19:02:59.232105 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77766fdf55-f5frm" Jan 20 19:02:59 crc kubenswrapper[4661]: I0120 19:02:59.237451 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-api-0" event={"ID":"87eba907-795e-421b-a22e-8f978bb12efd","Type":"ContainerStarted","Data":"3a83bee596a12afbf7077b9e71b2368c9934ae98d90d049b1adf7884be1cab9b"} Jan 20 19:02:59 crc kubenswrapper[4661]: I0120 19:02:59.238434 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 20 19:02:59 crc kubenswrapper[4661]: I0120 19:02:59.244762 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8abbf31c-d723-4945-9d0c-6268b4100937","Type":"ContainerStarted","Data":"1aea894c444a34524f1953d8b2fea6b1b4c902c8ee06cb518d784b937ae71c89"} Jan 20 19:02:59 crc kubenswrapper[4661]: I0120 19:02:59.267042 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77766fdf55-f5frm" podStartSLOduration=4.267017659 podStartE2EDuration="4.267017659s" podCreationTimestamp="2026-01-20 19:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 19:02:59.257377699 +0000 UTC m=+3435.588167361" watchObservedRunningTime="2026-01-20 19:02:59.267017659 +0000 UTC m=+3435.597807321" Jan 20 19:02:59 crc kubenswrapper[4661]: I0120 19:02:59.301483 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.301463893 podStartE2EDuration="4.301463893s" podCreationTimestamp="2026-01-20 19:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 19:02:59.28285954 +0000 UTC m=+3435.613649222" watchObservedRunningTime="2026-01-20 19:02:59.301463893 +0000 UTC m=+3435.632253555" Jan 20 19:02:59 crc kubenswrapper[4661]: I0120 19:02:59.339815 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:02:59 crc kubenswrapper[4661]: I0120 19:02:59.339878 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:03:00 crc kubenswrapper[4661]: I0120 19:03:00.263957 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="87eba907-795e-421b-a22e-8f978bb12efd" containerName="manila-api-log" containerID="cri-o://bb74839c0269ab4f68323750038cf1080b18213c9f8a164a748016dfeecc3629" gracePeriod=30 Jan 20 19:03:00 crc kubenswrapper[4661]: I0120 19:03:00.264597 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="87eba907-795e-421b-a22e-8f978bb12efd" containerName="manila-api" containerID="cri-o://3a83bee596a12afbf7077b9e71b2368c9934ae98d90d049b1adf7884be1cab9b" gracePeriod=30 Jan 20 19:03:00 crc kubenswrapper[4661]: I0120 19:03:00.264743 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8abbf31c-d723-4945-9d0c-6268b4100937","Type":"ContainerStarted","Data":"5ac0f780aa3fdcb68c4c06c3d6edc5ddbec9a0e8410b5468264cafb3ff071c5e"} Jan 20 19:03:01 crc kubenswrapper[4661]: 
I0120 19:03:01.288415 4661 generic.go:334] "Generic (PLEG): container finished" podID="87eba907-795e-421b-a22e-8f978bb12efd" containerID="3a83bee596a12afbf7077b9e71b2368c9934ae98d90d049b1adf7884be1cab9b" exitCode=0 Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.288876 4661 generic.go:334] "Generic (PLEG): container finished" podID="87eba907-795e-421b-a22e-8f978bb12efd" containerID="bb74839c0269ab4f68323750038cf1080b18213c9f8a164a748016dfeecc3629" exitCode=143 Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.288482 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"87eba907-795e-421b-a22e-8f978bb12efd","Type":"ContainerDied","Data":"3a83bee596a12afbf7077b9e71b2368c9934ae98d90d049b1adf7884be1cab9b"} Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.288985 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"87eba907-795e-421b-a22e-8f978bb12efd","Type":"ContainerDied","Data":"bb74839c0269ab4f68323750038cf1080b18213c9f8a164a748016dfeecc3629"} Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.425265 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.444534 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=5.520890693 podStartE2EDuration="6.44451401s" podCreationTimestamp="2026-01-20 19:02:55 +0000 UTC" firstStartedPulling="2026-01-20 19:02:56.963287136 +0000 UTC m=+3433.294076798" lastFinishedPulling="2026-01-20 19:02:57.886910453 +0000 UTC m=+3434.217700115" observedRunningTime="2026-01-20 19:03:00.307325817 +0000 UTC m=+3436.638115479" watchObservedRunningTime="2026-01-20 19:03:01.44451401 +0000 UTC m=+3437.775303672" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.596545 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-scripts\") pod \"87eba907-795e-421b-a22e-8f978bb12efd\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.596599 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trdhs\" (UniqueName: \"kubernetes.io/projected/87eba907-795e-421b-a22e-8f978bb12efd-kube-api-access-trdhs\") pod \"87eba907-795e-421b-a22e-8f978bb12efd\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.596694 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-config-data-custom\") pod \"87eba907-795e-421b-a22e-8f978bb12efd\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.596717 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-combined-ca-bundle\") pod \"87eba907-795e-421b-a22e-8f978bb12efd\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.596756 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-config-data\") pod 
\"87eba907-795e-421b-a22e-8f978bb12efd\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.596772 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87eba907-795e-421b-a22e-8f978bb12efd-etc-machine-id\") pod \"87eba907-795e-421b-a22e-8f978bb12efd\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.596797 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87eba907-795e-421b-a22e-8f978bb12efd-logs\") pod \"87eba907-795e-421b-a22e-8f978bb12efd\" (UID: \"87eba907-795e-421b-a22e-8f978bb12efd\") " Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.597709 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87eba907-795e-421b-a22e-8f978bb12efd-logs" (OuterVolumeSpecName: "logs") pod "87eba907-795e-421b-a22e-8f978bb12efd" (UID: "87eba907-795e-421b-a22e-8f978bb12efd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.598472 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87eba907-795e-421b-a22e-8f978bb12efd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "87eba907-795e-421b-a22e-8f978bb12efd" (UID: "87eba907-795e-421b-a22e-8f978bb12efd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.607206 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "87eba907-795e-421b-a22e-8f978bb12efd" (UID: "87eba907-795e-421b-a22e-8f978bb12efd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.607377 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87eba907-795e-421b-a22e-8f978bb12efd-kube-api-access-trdhs" (OuterVolumeSpecName: "kube-api-access-trdhs") pod "87eba907-795e-421b-a22e-8f978bb12efd" (UID: "87eba907-795e-421b-a22e-8f978bb12efd"). InnerVolumeSpecName "kube-api-access-trdhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.614807 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-scripts" (OuterVolumeSpecName: "scripts") pod "87eba907-795e-421b-a22e-8f978bb12efd" (UID: "87eba907-795e-421b-a22e-8f978bb12efd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.638785 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87eba907-795e-421b-a22e-8f978bb12efd" (UID: "87eba907-795e-421b-a22e-8f978bb12efd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.698973 4661 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.699003 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.699014 4661 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87eba907-795e-421b-a22e-8f978bb12efd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.699024 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87eba907-795e-421b-a22e-8f978bb12efd-logs\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.699032 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.699041 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trdhs\" (UniqueName: \"kubernetes.io/projected/87eba907-795e-421b-a22e-8f978bb12efd-kube-api-access-trdhs\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.737142 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-config-data" (OuterVolumeSpecName: "config-data") pod "87eba907-795e-421b-a22e-8f978bb12efd" (UID: "87eba907-795e-421b-a22e-8f978bb12efd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:01 crc kubenswrapper[4661]: I0120 19:03:01.800397 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87eba907-795e-421b-a22e-8f978bb12efd-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.311933 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"87eba907-795e-421b-a22e-8f978bb12efd","Type":"ContainerDied","Data":"d4d53d8ae96d63783774347b6e263bac4bd7106bc60c1708ed507e981296b5b8"} Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.311980 4661 scope.go:117] "RemoveContainer" containerID="3a83bee596a12afbf7077b9e71b2368c9934ae98d90d049b1adf7884be1cab9b" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.312094 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.342727 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.351661 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.368960 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 20 19:03:02 crc kubenswrapper[4661]: E0120 19:03:02.369420 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87eba907-795e-421b-a22e-8f978bb12efd" containerName="manila-api" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.369442 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="87eba907-795e-421b-a22e-8f978bb12efd" containerName="manila-api" Jan 20 19:03:02 crc kubenswrapper[4661]: E0120 19:03:02.369476 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87eba907-795e-421b-a22e-8f978bb12efd" containerName="manila-api-log" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.369483 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="87eba907-795e-421b-a22e-8f978bb12efd" containerName="manila-api-log" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.369755 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="87eba907-795e-421b-a22e-8f978bb12efd" containerName="manila-api" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.369790 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="87eba907-795e-421b-a22e-8f978bb12efd" containerName="manila-api-log" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.371069 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.378786 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.378990 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.379613 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.381563 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.512508 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpdcb\" (UniqueName: \"kubernetes.io/projected/c4f33125-e16c-4df4-9c3f-9f772fe671eb-kube-api-access-jpdcb\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.512561 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-config-data-custom\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.512589 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-public-tls-certs\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.512661 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4f33125-e16c-4df4-9c3f-9f772fe671eb-etc-machine-id\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.512706 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-internal-tls-certs\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.512728 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.512761 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-scripts\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.512852 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c4f33125-e16c-4df4-9c3f-9f772fe671eb-logs\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.512880 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-config-data\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.614398 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-internal-tls-certs\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.614455 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.614493 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-scripts\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.614515 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f33125-e16c-4df4-9c3f-9f772fe671eb-logs\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.614545 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-config-data\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.614590 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpdcb\" (UniqueName: \"kubernetes.io/projected/c4f33125-e16c-4df4-9c3f-9f772fe671eb-kube-api-access-jpdcb\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.614613 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-config-data-custom\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.614637 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-public-tls-certs\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.614706 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/c4f33125-e16c-4df4-9c3f-9f772fe671eb-etc-machine-id\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.614784 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4f33125-e16c-4df4-9c3f-9f772fe671eb-etc-machine-id\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.618497 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f33125-e16c-4df4-9c3f-9f772fe671eb-logs\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.630152 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-scripts\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.631366 4661 scope.go:117] "RemoveContainer" containerID="bb74839c0269ab4f68323750038cf1080b18213c9f8a164a748016dfeecc3629" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.631635 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-internal-tls-certs\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.632407 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-public-tls-certs\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.634456 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-config-data-custom\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.637334 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.648106 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f33125-e16c-4df4-9c3f-9f772fe671eb-config-data\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.652214 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpdcb\" (UniqueName: \"kubernetes.io/projected/c4f33125-e16c-4df4-9c3f-9f772fe671eb-kube-api-access-jpdcb\") pod \"manila-api-0\" (UID: \"c4f33125-e16c-4df4-9c3f-9f772fe671eb\") " pod="openstack/manila-api-0" Jan 20 19:03:02 crc kubenswrapper[4661]: I0120 19:03:02.695420 4661 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 20 19:03:03 crc kubenswrapper[4661]: I0120 19:03:03.414742 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 20 19:03:03 crc kubenswrapper[4661]: I0120 19:03:03.941387 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 20 19:03:03 crc kubenswrapper[4661]: I0120 19:03:03.942413 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 20 19:03:04 crc kubenswrapper[4661]: I0120 19:03:04.159164 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87eba907-795e-421b-a22e-8f978bb12efd" path="/var/lib/kubelet/pods/87eba907-795e-421b-a22e-8f978bb12efd/volumes" Jan 20 19:03:04 crc kubenswrapper[4661]: I0120 19:03:04.349037 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c4f33125-e16c-4df4-9c3f-9f772fe671eb","Type":"ContainerStarted","Data":"26abea7aa72f3524a5c578812619baf6939dcbcdbde1b3c27d8725235589f8e9"} Jan 20 19:03:04 crc kubenswrapper[4661]: I0120 19:03:04.349071 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c4f33125-e16c-4df4-9c3f-9f772fe671eb","Type":"ContainerStarted","Data":"b22d0a0934437856b0452eff62590e2d4552ad5d4de0bcaf3ffff9fef034ad16"} Jan 20 19:03:04 crc kubenswrapper[4661]: I0120 19:03:04.481915 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 20 19:03:04 crc kubenswrapper[4661]: I0120 19:03:04.482881 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 20 19:03:05 crc kubenswrapper[4661]: I0120 19:03:05.363953 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c4f33125-e16c-4df4-9c3f-9f772fe671eb","Type":"ContainerStarted","Data":"983b24532e6ca6e8ff9e62e1b1a314e7dbfe1b7234625bf78ddbbd0ed69dc697"} Jan 20 19:03:05 crc kubenswrapper[4661]: I0120 19:03:05.406351 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.406331706 podStartE2EDuration="3.406331706s" podCreationTimestamp="2026-01-20 19:03:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 19:03:05.399881669 +0000 UTC m=+3441.730671331" watchObservedRunningTime="2026-01-20 19:03:05.406331706 +0000 UTC m=+3441.737121368" Jan 20 19:03:06 crc kubenswrapper[4661]: I0120 19:03:06.226266 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 20 19:03:06 crc kubenswrapper[4661]: I0120 19:03:06.324053 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:03:06 crc kubenswrapper[4661]: I0120 19:03:06.361284 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:03:06 crc kubenswrapper[4661]: I0120 19:03:06.379661 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 20 19:03:06 crc kubenswrapper[4661]: I0120 19:03:06.407105 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77766fdf55-f5frm" 
Jan 20 19:03:06 crc kubenswrapper[4661]: I0120 19:03:06.489009 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fb68d687f-b2v4r"] Jan 20 19:03:06 crc kubenswrapper[4661]: I0120 19:03:06.489538 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" podUID="370135b1-1365-490b-a9ae-d8ffb1361718" containerName="dnsmasq-dns" containerID="cri-o://5e98075e4224430510b945da043c9e1de92368693b303b6ff3394f290cf2b9d6" gracePeriod=10 Jan 20 19:03:06 crc kubenswrapper[4661]: W0120 19:03:06.876364 4661 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87eba907_795e_421b_a22e_8f978bb12efd.slice/crio-conmon-3a83bee596a12afbf7077b9e71b2368c9934ae98d90d049b1adf7884be1cab9b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87eba907_795e_421b_a22e_8f978bb12efd.slice/crio-conmon-3a83bee596a12afbf7077b9e71b2368c9934ae98d90d049b1adf7884be1cab9b.scope: no such file or directory Jan 20 19:03:06 crc kubenswrapper[4661]: W0120 19:03:06.876411 4661 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87eba907_795e_421b_a22e_8f978bb12efd.slice/crio-3a83bee596a12afbf7077b9e71b2368c9934ae98d90d049b1adf7884be1cab9b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87eba907_795e_421b_a22e_8f978bb12efd.slice/crio-3a83bee596a12afbf7077b9e71b2368c9934ae98d90d049b1adf7884be1cab9b.scope: no such file or directory Jan 20 19:03:06 crc kubenswrapper[4661]: W0120 19:03:06.900007 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87eba907_795e_421b_a22e_8f978bb12efd.slice/crio-bb74839c0269ab4f68323750038cf1080b18213c9f8a164a748016dfeecc3629.scope WatchSource:0}: Error finding container bb74839c0269ab4f68323750038cf1080b18213c9f8a164a748016dfeecc3629: Status 404 returned error can't find the container with id bb74839c0269ab4f68323750038cf1080b18213c9f8a164a748016dfeecc3629 Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.061200 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.065788 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="ceilometer-central-agent" containerID="cri-o://b95873b03817a483b37b6ed1e7535453263afc186b179fc7fc1023993731b6fd" gracePeriod=30 Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.066048 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="ceilometer-notification-agent" containerID="cri-o://3fab0463fdd09147fb51758b6c3d597b4e32e27d4493ec06b416f7571e4aaa15" gracePeriod=30 Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.066130 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="proxy-httpd" containerID="cri-o://1820bd00bb83f6e1cf122d3212d4f4b1f8a9c754950efa1716a5267c6ae93be6" gracePeriod=30 Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.066251 4661 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="sg-core" containerID="cri-o://f06e4917bda1c71acade3fe59d15bbc3b29245bded338777e34e0f592a9bf5a0" gracePeriod=30 Jan 20 19:03:07 crc kubenswrapper[4661]: E0120 19:03:07.234801 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24a02294_d575_420a_a004_9eaac022318e.slice/crio-conmon-96931cecffd32deb0b5a65174eef09968b1a6eab634134c3542de608948b5409.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b62ec6_2f48_4eba_a9dc_612d50b1c7f4.slice/crio-27aa1539165405c3cef6d31f01c8779f30536b05d0703687c5140f97d0808c1c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod370135b1_1365_490b_a9ae_d8ffb1361718.slice/crio-conmon-5e98075e4224430510b945da043c9e1de92368693b303b6ff3394f290cf2b9d6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87eba907_795e_421b_a22e_8f978bb12efd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b62ec6_2f48_4eba_a9dc_612d50b1c7f4.slice/crio-conmon-27aa1539165405c3cef6d31f01c8779f30536b05d0703687c5140f97d0808c1c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b62ec6_2f48_4eba_a9dc_612d50b1c7f4.slice/crio-a5f896f95be8c2cc37c73cee92fa21cb98027960309cefa619025e9a051a0760.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b62ec6_2f48_4eba_a9dc_612d50b1c7f4.slice/crio-conmon-a5f896f95be8c2cc37c73cee92fa21cb98027960309cefa619025e9a051a0760.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87eba907_795e_421b_a22e_8f978bb12efd.slice/crio-d4d53d8ae96d63783774347b6e263bac4bd7106bc60c1708ed507e981296b5b8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod370135b1_1365_490b_a9ae_d8ffb1361718.slice/crio-5e98075e4224430510b945da043c9e1de92368693b303b6ff3394f290cf2b9d6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24a02294_d575_420a_a004_9eaac022318e.slice/crio-conmon-41cff747617a35fc61aab25c795cfd19ccc8c87aee5b4b4176116edc066819f0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87eba907_795e_421b_a22e_8f978bb12efd.slice/crio-conmon-bb74839c0269ab4f68323750038cf1080b18213c9f8a164a748016dfeecc3629.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24a02294_d575_420a_a004_9eaac022318e.slice/crio-96931cecffd32deb0b5a65174eef09968b1a6eab634134c3542de608948b5409.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf4af574_77f9_45e3_8791_1a9a5ca67b38.slice/crio-1820bd00bb83f6e1cf122d3212d4f4b1f8a9c754950efa1716a5267c6ae93be6.scope\": RecentStats: unable to find data in memory cache]" Jan 20 19:03:07 
crc kubenswrapper[4661]: I0120 19:03:07.395850 4661 generic.go:334] "Generic (PLEG): container finished" podID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerID="1820bd00bb83f6e1cf122d3212d4f4b1f8a9c754950efa1716a5267c6ae93be6" exitCode=0 Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.396248 4661 generic.go:334] "Generic (PLEG): container finished" podID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerID="f06e4917bda1c71acade3fe59d15bbc3b29245bded338777e34e0f592a9bf5a0" exitCode=2 Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.396335 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf4af574-77f9-45e3-8791-1a9a5ca67b38","Type":"ContainerDied","Data":"1820bd00bb83f6e1cf122d3212d4f4b1f8a9c754950efa1716a5267c6ae93be6"} Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.396405 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf4af574-77f9-45e3-8791-1a9a5ca67b38","Type":"ContainerDied","Data":"f06e4917bda1c71acade3fe59d15bbc3b29245bded338777e34e0f592a9bf5a0"} Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.399971 4661 generic.go:334] "Generic (PLEG): container finished" podID="c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" containerID="a5f896f95be8c2cc37c73cee92fa21cb98027960309cefa619025e9a051a0760" exitCode=137 Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.400073 4661 generic.go:334] "Generic (PLEG): container finished" podID="c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" containerID="27aa1539165405c3cef6d31f01c8779f30536b05d0703687c5140f97d0808c1c" exitCode=137 Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.400159 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676bf6649-b97jp" event={"ID":"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4","Type":"ContainerDied","Data":"a5f896f95be8c2cc37c73cee92fa21cb98027960309cefa619025e9a051a0760"} Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.400225 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676bf6649-b97jp" event={"ID":"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4","Type":"ContainerDied","Data":"27aa1539165405c3cef6d31f01c8779f30536b05d0703687c5140f97d0808c1c"} Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.401984 4661 generic.go:334] "Generic (PLEG): container finished" podID="370135b1-1365-490b-a9ae-d8ffb1361718" containerID="5e98075e4224430510b945da043c9e1de92368693b303b6ff3394f290cf2b9d6" exitCode=0 Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.402082 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" event={"ID":"370135b1-1365-490b-a9ae-d8ffb1361718","Type":"ContainerDied","Data":"5e98075e4224430510b945da043c9e1de92368693b303b6ff3394f290cf2b9d6"} Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.416849 4661 generic.go:334] "Generic (PLEG): container finished" podID="24a02294-d575-420a-a004-9eaac022318e" containerID="41cff747617a35fc61aab25c795cfd19ccc8c87aee5b4b4176116edc066819f0" exitCode=137 Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.416890 4661 generic.go:334] "Generic (PLEG): container finished" podID="24a02294-d575-420a-a004-9eaac022318e" containerID="96931cecffd32deb0b5a65174eef09968b1a6eab634134c3542de608948b5409" exitCode=137 Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.417178 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68df949b55-t6lcn" 
event={"ID":"24a02294-d575-420a-a004-9eaac022318e","Type":"ContainerDied","Data":"41cff747617a35fc61aab25c795cfd19ccc8c87aee5b4b4176116edc066819f0"} Jan 20 19:03:07 crc kubenswrapper[4661]: I0120 19:03:07.417261 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68df949b55-t6lcn" event={"ID":"24a02294-d575-420a-a004-9eaac022318e","Type":"ContainerDied","Data":"96931cecffd32deb0b5a65174eef09968b1a6eab634134c3542de608948b5409"} Jan 20 19:03:08 crc kubenswrapper[4661]: I0120 19:03:08.435976 4661 generic.go:334] "Generic (PLEG): container finished" podID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerID="b95873b03817a483b37b6ed1e7535453263afc186b179fc7fc1023993731b6fd" exitCode=0 Jan 20 19:03:08 crc kubenswrapper[4661]: I0120 19:03:08.436019 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf4af574-77f9-45e3-8791-1a9a5ca67b38","Type":"ContainerDied","Data":"b95873b03817a483b37b6ed1e7535453263afc186b179fc7fc1023993731b6fd"} Jan 20 19:03:08 crc kubenswrapper[4661]: I0120 19:03:08.638898 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-658f6cd46d-59d52" Jan 20 19:03:08 crc kubenswrapper[4661]: I0120 19:03:08.753828 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-844bcbddd8-v9pcc"] Jan 20 19:03:08 crc kubenswrapper[4661]: I0120 19:03:08.754038 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-844bcbddd8-v9pcc" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon-log" containerID="cri-o://5111dbc27e814b94b5795799cd69cac1ddd769cbf1816e3666e0f1967d88e741" gracePeriod=30 Jan 20 19:03:08 crc kubenswrapper[4661]: I0120 19:03:08.754448 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-844bcbddd8-v9pcc" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon" containerID="cri-o://f921a9bdfcd4155d0506b5d9b39786056aba403a0899c9ec152853ac2b3655f1" gracePeriod=30 Jan 20 19:03:08 crc kubenswrapper[4661]: I0120 19:03:08.773687 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-844bcbddd8-v9pcc" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 20 19:03:08 crc kubenswrapper[4661]: I0120 19:03:08.775837 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-844bcbddd8-v9pcc" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:46502->10.217.0.246:8443: read: connection reset by peer" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.168344 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.291716 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-dns-svc\") pod \"370135b1-1365-490b-a9ae-d8ffb1361718\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.291863 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-ovsdbserver-nb\") pod \"370135b1-1365-490b-a9ae-d8ffb1361718\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.291902 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-ovsdbserver-sb\") pod \"370135b1-1365-490b-a9ae-d8ffb1361718\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.291939 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c56dl\" (UniqueName: \"kubernetes.io/projected/370135b1-1365-490b-a9ae-d8ffb1361718-kube-api-access-c56dl\") pod \"370135b1-1365-490b-a9ae-d8ffb1361718\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.291961 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-openstack-edpm-ipam\") pod \"370135b1-1365-490b-a9ae-d8ffb1361718\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.292025 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-config\") pod \"370135b1-1365-490b-a9ae-d8ffb1361718\" (UID: \"370135b1-1365-490b-a9ae-d8ffb1361718\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.311524 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/370135b1-1365-490b-a9ae-d8ffb1361718-kube-api-access-c56dl" (OuterVolumeSpecName: "kube-api-access-c56dl") pod "370135b1-1365-490b-a9ae-d8ffb1361718" (UID: "370135b1-1365-490b-a9ae-d8ffb1361718"). InnerVolumeSpecName "kube-api-access-c56dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.395053 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c56dl\" (UniqueName: \"kubernetes.io/projected/370135b1-1365-490b-a9ae-d8ffb1361718-kube-api-access-c56dl\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.433131 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "370135b1-1365-490b-a9ae-d8ffb1361718" (UID: "370135b1-1365-490b-a9ae-d8ffb1361718"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.433605 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "370135b1-1365-490b-a9ae-d8ffb1361718" (UID: "370135b1-1365-490b-a9ae-d8ffb1361718"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.481580 4661 generic.go:334] "Generic (PLEG): container finished" podID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerID="3fab0463fdd09147fb51758b6c3d597b4e32e27d4493ec06b416f7571e4aaa15" exitCode=0 Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.481657 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf4af574-77f9-45e3-8791-1a9a5ca67b38","Type":"ContainerDied","Data":"3fab0463fdd09147fb51758b6c3d597b4e32e27d4493ec06b416f7571e4aaa15"} Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.482198 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "370135b1-1365-490b-a9ae-d8ffb1361718" (UID: "370135b1-1365-490b-a9ae-d8ffb1361718"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.483534 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" event={"ID":"370135b1-1365-490b-a9ae-d8ffb1361718","Type":"ContainerDied","Data":"d2ba502a0fe2c4a68dc1d43015bd5ec21c3e7ab11b0b0efc83d5d882814d6b6f"} Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.483576 4661 scope.go:117] "RemoveContainer" containerID="5e98075e4224430510b945da043c9e1de92368693b303b6ff3394f290cf2b9d6" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.483738 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fb68d687f-b2v4r" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.497784 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.497818 4661 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.497827 4661 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.503070 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.510210 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-config" (OuterVolumeSpecName: "config") pod "370135b1-1365-490b-a9ae-d8ffb1361718" (UID: "370135b1-1365-490b-a9ae-d8ffb1361718"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.517560 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.517993 4661 scope.go:117] "RemoveContainer" containerID="9382aa020a5557717fe06067ed0346f0ddaddff9324539aba24af414acd4c186" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.541473 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "370135b1-1365-490b-a9ae-d8ffb1361718" (UID: "370135b1-1365-490b-a9ae-d8ffb1361718"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.574385 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.605572 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24a02294-d575-420a-a004-9eaac022318e-horizon-secret-key\") pod \"24a02294-d575-420a-a004-9eaac022318e\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.605758 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxvqn\" (UniqueName: \"kubernetes.io/projected/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-kube-api-access-gxvqn\") pod \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.605970 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-logs\") pod \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.606156 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24a02294-d575-420a-a004-9eaac022318e-config-data\") pod \"24a02294-d575-420a-a004-9eaac022318e\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.606237 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-config-data\") pod \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.606337 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-scripts\") pod \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.606419 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f64vj\" (UniqueName: \"kubernetes.io/projected/24a02294-d575-420a-a004-9eaac022318e-kube-api-access-f64vj\") pod \"24a02294-d575-420a-a004-9eaac022318e\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 
19:03:10.606537 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24a02294-d575-420a-a004-9eaac022318e-scripts\") pod \"24a02294-d575-420a-a004-9eaac022318e\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.606640 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24a02294-d575-420a-a004-9eaac022318e-logs\") pod \"24a02294-d575-420a-a004-9eaac022318e\" (UID: \"24a02294-d575-420a-a004-9eaac022318e\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.606792 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-horizon-secret-key\") pod \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\" (UID: \"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.607825 4661 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.607896 4661 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370135b1-1365-490b-a9ae-d8ffb1361718-config\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.613372 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-logs" (OuterVolumeSpecName: "logs") pod "c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" (UID: "c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.616006 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-kube-api-access-gxvqn" (OuterVolumeSpecName: "kube-api-access-gxvqn") pod "c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" (UID: "c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4"). InnerVolumeSpecName "kube-api-access-gxvqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.620781 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a02294-d575-420a-a004-9eaac022318e-kube-api-access-f64vj" (OuterVolumeSpecName: "kube-api-access-f64vj") pod "24a02294-d575-420a-a004-9eaac022318e" (UID: "24a02294-d575-420a-a004-9eaac022318e"). InnerVolumeSpecName "kube-api-access-f64vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.622774 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a02294-d575-420a-a004-9eaac022318e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "24a02294-d575-420a-a004-9eaac022318e" (UID: "24a02294-d575-420a-a004-9eaac022318e"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.622849 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24a02294-d575-420a-a004-9eaac022318e-logs" (OuterVolumeSpecName: "logs") pod "24a02294-d575-420a-a004-9eaac022318e" (UID: "24a02294-d575-420a-a004-9eaac022318e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.633746 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" (UID: "c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.681091 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-scripts" (OuterVolumeSpecName: "scripts") pod "c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" (UID: "c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.696083 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a02294-d575-420a-a004-9eaac022318e-scripts" (OuterVolumeSpecName: "scripts") pod "24a02294-d575-420a-a004-9eaac022318e" (UID: "24a02294-d575-420a-a004-9eaac022318e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.697048 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a02294-d575-420a-a004-9eaac022318e-config-data" (OuterVolumeSpecName: "config-data") pod "24a02294-d575-420a-a004-9eaac022318e" (UID: "24a02294-d575-420a-a004-9eaac022318e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.703255 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-config-data" (OuterVolumeSpecName: "config-data") pod "c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" (UID: "c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.709256 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-scripts\") pod \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.709357 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-ceilometer-tls-certs\") pod \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.709388 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf95r\" (UniqueName: \"kubernetes.io/projected/cf4af574-77f9-45e3-8791-1a9a5ca67b38-kube-api-access-zf95r\") pod \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.709410 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-combined-ca-bundle\") pod \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.709437 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-config-data\") pod \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.709540 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf4af574-77f9-45e3-8791-1a9a5ca67b38-run-httpd\") pod \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.709713 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-sg-core-conf-yaml\") pod \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.709736 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf4af574-77f9-45e3-8791-1a9a5ca67b38-log-httpd\") pod \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\" (UID: \"cf4af574-77f9-45e3-8791-1a9a5ca67b38\") " Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.710091 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24a02294-d575-420a-a004-9eaac022318e-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.710107 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.710116 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.710125 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f64vj\" (UniqueName: \"kubernetes.io/projected/24a02294-d575-420a-a004-9eaac022318e-kube-api-access-f64vj\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.710137 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24a02294-d575-420a-a004-9eaac022318e-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.710146 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24a02294-d575-420a-a004-9eaac022318e-logs\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.710154 4661 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.710162 4661 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24a02294-d575-420a-a004-9eaac022318e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.710171 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxvqn\" (UniqueName: \"kubernetes.io/projected/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-kube-api-access-gxvqn\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.710180 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4-logs\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.710595 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4af574-77f9-45e3-8791-1a9a5ca67b38-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cf4af574-77f9-45e3-8791-1a9a5ca67b38" (UID: "cf4af574-77f9-45e3-8791-1a9a5ca67b38"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.713133 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4af574-77f9-45e3-8791-1a9a5ca67b38-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cf4af574-77f9-45e3-8791-1a9a5ca67b38" (UID: "cf4af574-77f9-45e3-8791-1a9a5ca67b38"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.727102 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-scripts" (OuterVolumeSpecName: "scripts") pod "cf4af574-77f9-45e3-8791-1a9a5ca67b38" (UID: "cf4af574-77f9-45e3-8791-1a9a5ca67b38"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.731715 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4af574-77f9-45e3-8791-1a9a5ca67b38-kube-api-access-zf95r" (OuterVolumeSpecName: "kube-api-access-zf95r") pod "cf4af574-77f9-45e3-8791-1a9a5ca67b38" (UID: "cf4af574-77f9-45e3-8791-1a9a5ca67b38"). InnerVolumeSpecName "kube-api-access-zf95r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.799893 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "cf4af574-77f9-45e3-8791-1a9a5ca67b38" (UID: "cf4af574-77f9-45e3-8791-1a9a5ca67b38"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.822892 4661 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.823100 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf95r\" (UniqueName: \"kubernetes.io/projected/cf4af574-77f9-45e3-8791-1a9a5ca67b38-kube-api-access-zf95r\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.823206 4661 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf4af574-77f9-45e3-8791-1a9a5ca67b38-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.823259 4661 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf4af574-77f9-45e3-8791-1a9a5ca67b38-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.824152 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.841868 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cf4af574-77f9-45e3-8791-1a9a5ca67b38" (UID: "cf4af574-77f9-45e3-8791-1a9a5ca67b38"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.879216 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf4af574-77f9-45e3-8791-1a9a5ca67b38" (UID: "cf4af574-77f9-45e3-8791-1a9a5ca67b38"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.908185 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fb68d687f-b2v4r"] Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.922749 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-config-data" (OuterVolumeSpecName: "config-data") pod "cf4af574-77f9-45e3-8791-1a9a5ca67b38" (UID: "cf4af574-77f9-45e3-8791-1a9a5ca67b38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.923167 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fb68d687f-b2v4r"] Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.927242 4661 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.927273 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:10 crc kubenswrapper[4661]: I0120 19:03:10.927283 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4af574-77f9-45e3-8791-1a9a5ca67b38-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.495126 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"adb303b3-0970-4438-92ee-dd30a6fb55b2","Type":"ContainerStarted","Data":"5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8"} Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.495378 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"adb303b3-0970-4438-92ee-dd30a6fb55b2","Type":"ContainerStarted","Data":"955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa"} Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.497092 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68df949b55-t6lcn" event={"ID":"24a02294-d575-420a-a004-9eaac022318e","Type":"ContainerDied","Data":"3f77b1bb053b3f39e6f8f48c055a43bd4cd54b5524821232721a14ac2078bb4e"} Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.497124 4661 scope.go:117] "RemoveContainer" containerID="41cff747617a35fc61aab25c795cfd19ccc8c87aee5b4b4176116edc066819f0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.497197 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68df949b55-t6lcn" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.505897 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf4af574-77f9-45e3-8791-1a9a5ca67b38","Type":"ContainerDied","Data":"4a5532fca5dc717a4ca47aa716bf083c5edb060f6e8f7c4a15df81205046b36c"} Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.506015 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.526928 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676bf6649-b97jp" event={"ID":"c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4","Type":"ContainerDied","Data":"b8f2f9cb07882c796b1da7f28013c0e28f89a0106965273d7d8e5ae5221ee51c"} Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.527021 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-676bf6649-b97jp" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.555992 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.713169069 podStartE2EDuration="16.555969272s" podCreationTimestamp="2026-01-20 19:02:55 +0000 UTC" firstStartedPulling="2026-01-20 19:02:56.928013653 +0000 UTC m=+3433.258803315" lastFinishedPulling="2026-01-20 19:03:09.770813856 +0000 UTC m=+3446.101603518" observedRunningTime="2026-01-20 19:03:11.533372335 +0000 UTC m=+3447.864161997" watchObservedRunningTime="2026-01-20 19:03:11.555969272 +0000 UTC m=+3447.886758934" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.562694 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68df949b55-t6lcn"] Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.578696 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68df949b55-t6lcn"] Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.593124 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.604638 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.615794 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-676bf6649-b97jp"] Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.621710 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-676bf6649-b97jp"] Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.634731 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 19:03:11 crc kubenswrapper[4661]: E0120 19:03:11.636190 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" containerName="horizon" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.636218 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" containerName="horizon" Jan 20 19:03:11 crc kubenswrapper[4661]: E0120 19:03:11.636256 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a02294-d575-420a-a004-9eaac022318e" containerName="horizon" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.636266 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a02294-d575-420a-a004-9eaac022318e" containerName="horizon" Jan 20 19:03:11 crc kubenswrapper[4661]: E0120 19:03:11.636297 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370135b1-1365-490b-a9ae-d8ffb1361718" containerName="init" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.636307 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="370135b1-1365-490b-a9ae-d8ffb1361718" containerName="init" Jan 20 19:03:11 crc kubenswrapper[4661]: E0120 19:03:11.636338 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" 
containerName="horizon-log" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.636347 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" containerName="horizon-log" Jan 20 19:03:11 crc kubenswrapper[4661]: E0120 19:03:11.636371 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370135b1-1365-490b-a9ae-d8ffb1361718" containerName="dnsmasq-dns" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.636381 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="370135b1-1365-490b-a9ae-d8ffb1361718" containerName="dnsmasq-dns" Jan 20 19:03:11 crc kubenswrapper[4661]: E0120 19:03:11.636416 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="proxy-httpd" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.636426 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="proxy-httpd" Jan 20 19:03:11 crc kubenswrapper[4661]: E0120 19:03:11.636446 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a02294-d575-420a-a004-9eaac022318e" containerName="horizon-log" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.636455 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a02294-d575-420a-a004-9eaac022318e" containerName="horizon-log" Jan 20 19:03:11 crc kubenswrapper[4661]: E0120 19:03:11.636481 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="sg-core" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.636489 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="sg-core" Jan 20 19:03:11 crc kubenswrapper[4661]: E0120 19:03:11.636509 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="ceilometer-notification-agent" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.636523 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="ceilometer-notification-agent" Jan 20 19:03:11 crc kubenswrapper[4661]: E0120 19:03:11.636552 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="ceilometer-central-agent" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.636560 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="ceilometer-central-agent" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.637643 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="proxy-httpd" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.637701 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" containerName="horizon-log" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.637720 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a02294-d575-420a-a004-9eaac022318e" containerName="horizon-log" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.637750 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="sg-core" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.637783 4661 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" containerName="horizon" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.637805 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a02294-d575-420a-a004-9eaac022318e" containerName="horizon" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.637822 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="370135b1-1365-490b-a9ae-d8ffb1361718" containerName="dnsmasq-dns" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.637851 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="ceilometer-central-agent" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.637870 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" containerName="ceilometer-notification-agent" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.648552 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.655187 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.664813 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.665365 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.666017 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.749975 4661 scope.go:117] "RemoveContainer" containerID="96931cecffd32deb0b5a65174eef09968b1a6eab634134c3542de608948b5409" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.766537 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.766601 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.766633 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.766737 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-config-data\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.766755 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-ss57x\" (UniqueName: \"kubernetes.io/projected/e71b515e-15b7-49d7-b4ba-a3676b1e851b-kube-api-access-ss57x\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.766792 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-scripts\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.766817 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71b515e-15b7-49d7-b4ba-a3676b1e851b-run-httpd\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.766840 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71b515e-15b7-49d7-b4ba-a3676b1e851b-log-httpd\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.779101 4661 scope.go:117] "RemoveContainer" containerID="1820bd00bb83f6e1cf122d3212d4f4b1f8a9c754950efa1716a5267c6ae93be6" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.804795 4661 scope.go:117] "RemoveContainer" containerID="f06e4917bda1c71acade3fe59d15bbc3b29245bded338777e34e0f592a9bf5a0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.823131 4661 scope.go:117] "RemoveContainer" containerID="3fab0463fdd09147fb51758b6c3d597b4e32e27d4493ec06b416f7571e4aaa15" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.842970 4661 scope.go:117] "RemoveContainer" containerID="b95873b03817a483b37b6ed1e7535453263afc186b179fc7fc1023993731b6fd" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.869451 4661 scope.go:117] "RemoveContainer" containerID="a5f896f95be8c2cc37c73cee92fa21cb98027960309cefa619025e9a051a0760" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.869842 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-scripts\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.869910 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71b515e-15b7-49d7-b4ba-a3676b1e851b-run-httpd\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.869984 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71b515e-15b7-49d7-b4ba-a3676b1e851b-log-httpd\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.870018 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.870424 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71b515e-15b7-49d7-b4ba-a3676b1e851b-run-httpd\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.870485 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71b515e-15b7-49d7-b4ba-a3676b1e851b-log-httpd\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.870623 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.870659 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.871162 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-config-data\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.871196 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss57x\" (UniqueName: \"kubernetes.io/projected/e71b515e-15b7-49d7-b4ba-a3676b1e851b-kube-api-access-ss57x\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.878742 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-scripts\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.879566 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.880552 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-config-data\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.883110 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " 
pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.893395 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.901177 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss57x\" (UniqueName: \"kubernetes.io/projected/e71b515e-15b7-49d7-b4ba-a3676b1e851b-kube-api-access-ss57x\") pod \"ceilometer-0\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " pod="openstack/ceilometer-0" Jan 20 19:03:11 crc kubenswrapper[4661]: I0120 19:03:11.990808 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 19:03:12 crc kubenswrapper[4661]: I0120 19:03:12.260057 4661 scope.go:117] "RemoveContainer" containerID="27aa1539165405c3cef6d31f01c8779f30536b05d0703687c5140f97d0808c1c" Jan 20 19:03:12 crc kubenswrapper[4661]: I0120 19:03:12.276369 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a02294-d575-420a-a004-9eaac022318e" path="/var/lib/kubelet/pods/24a02294-d575-420a-a004-9eaac022318e/volumes" Jan 20 19:03:12 crc kubenswrapper[4661]: I0120 19:03:12.277143 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="370135b1-1365-490b-a9ae-d8ffb1361718" path="/var/lib/kubelet/pods/370135b1-1365-490b-a9ae-d8ffb1361718/volumes" Jan 20 19:03:12 crc kubenswrapper[4661]: I0120 19:03:12.277762 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4" path="/var/lib/kubelet/pods/c7b62ec6-2f48-4eba-a9dc-612d50b1c7f4/volumes" Jan 20 19:03:12 crc kubenswrapper[4661]: I0120 19:03:12.279449 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf4af574-77f9-45e3-8791-1a9a5ca67b38" path="/var/lib/kubelet/pods/cf4af574-77f9-45e3-8791-1a9a5ca67b38/volumes" Jan 20 19:03:12 crc kubenswrapper[4661]: I0120 19:03:12.280146 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 19:03:12 crc kubenswrapper[4661]: I0120 19:03:12.541167 4661 generic.go:334] "Generic (PLEG): container finished" podID="ea579f19-b21d-4098-8f52-517be45768fb" containerID="f921a9bdfcd4155d0506b5d9b39786056aba403a0899c9ec152853ac2b3655f1" exitCode=0 Jan 20 19:03:12 crc kubenswrapper[4661]: I0120 19:03:12.541225 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-844bcbddd8-v9pcc" event={"ID":"ea579f19-b21d-4098-8f52-517be45768fb","Type":"ContainerDied","Data":"f921a9bdfcd4155d0506b5d9b39786056aba403a0899c9ec152853ac2b3655f1"} Jan 20 19:03:12 crc kubenswrapper[4661]: I0120 19:03:12.845918 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 19:03:13 crc kubenswrapper[4661]: I0120 19:03:13.486032 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-844bcbddd8-v9pcc" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Jan 20 19:03:13 crc kubenswrapper[4661]: I0120 19:03:13.576589 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e71b515e-15b7-49d7-b4ba-a3676b1e851b","Type":"ContainerStarted","Data":"83ee3784647fe69a536a01dbf46066e6298e1d9a33896db7c4d78b3b368d23c8"} Jan 20 19:03:14 crc kubenswrapper[4661]: I0120 19:03:14.592319 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71b515e-15b7-49d7-b4ba-a3676b1e851b","Type":"ContainerStarted","Data":"199e101302493bdb623964bd04fd5e4b118c08ddcb3a0303e4d761bd8bfb8b49"} Jan 20 19:03:15 crc kubenswrapper[4661]: I0120 19:03:15.604479 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71b515e-15b7-49d7-b4ba-a3676b1e851b","Type":"ContainerStarted","Data":"067f29a421106dafa985de5ca282d0a35c29ced4cfa79d19a85b567dce6f1f31"} Jan 20 19:03:16 crc kubenswrapper[4661]: I0120 19:03:16.001970 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 20 19:03:16 crc kubenswrapper[4661]: I0120 19:03:16.614879 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71b515e-15b7-49d7-b4ba-a3676b1e851b","Type":"ContainerStarted","Data":"af69256a643cd9b02a3ba0cb27fca5a3367c5bbaae9f317a6e25e9c69b35bc9e"} Jan 20 19:03:17 crc kubenswrapper[4661]: I0120 19:03:17.626576 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71b515e-15b7-49d7-b4ba-a3676b1e851b","Type":"ContainerStarted","Data":"f9588c446aa9916ba0bcfecbed56426400cad6882de43059c3eb4b4cea4562dc"} Jan 20 19:03:17 crc kubenswrapper[4661]: I0120 19:03:17.627062 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 19:03:17 crc kubenswrapper[4661]: I0120 19:03:17.626951 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="ceilometer-central-agent" containerID="cri-o://199e101302493bdb623964bd04fd5e4b118c08ddcb3a0303e4d761bd8bfb8b49" gracePeriod=30 Jan 20 19:03:17 crc kubenswrapper[4661]: I0120 19:03:17.626914 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="ceilometer-notification-agent" containerID="cri-o://067f29a421106dafa985de5ca282d0a35c29ced4cfa79d19a85b567dce6f1f31" gracePeriod=30 Jan 20 19:03:17 crc kubenswrapper[4661]: I0120 19:03:17.626952 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="proxy-httpd" containerID="cri-o://f9588c446aa9916ba0bcfecbed56426400cad6882de43059c3eb4b4cea4562dc" gracePeriod=30 Jan 20 19:03:17 crc kubenswrapper[4661]: I0120 19:03:17.626912 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="sg-core" containerID="cri-o://af69256a643cd9b02a3ba0cb27fca5a3367c5bbaae9f317a6e25e9c69b35bc9e" gracePeriod=30 Jan 20 19:03:17 crc kubenswrapper[4661]: I0120 19:03:17.662703 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.740336422 podStartE2EDuration="6.662681884s" podCreationTimestamp="2026-01-20 19:03:11 +0000 UTC" firstStartedPulling="2026-01-20 19:03:12.855431508 +0000 UTC m=+3449.186221170" lastFinishedPulling="2026-01-20 19:03:16.77777697 +0000 UTC m=+3453.108566632" 
observedRunningTime="2026-01-20 19:03:17.656509093 +0000 UTC m=+3453.987298775" watchObservedRunningTime="2026-01-20 19:03:17.662681884 +0000 UTC m=+3453.993471556" Jan 20 19:03:18 crc kubenswrapper[4661]: I0120 19:03:18.258034 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 20 19:03:18 crc kubenswrapper[4661]: I0120 19:03:18.320140 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 19:03:18 crc kubenswrapper[4661]: I0120 19:03:18.644212 4661 generic.go:334] "Generic (PLEG): container finished" podID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerID="f9588c446aa9916ba0bcfecbed56426400cad6882de43059c3eb4b4cea4562dc" exitCode=0 Jan 20 19:03:18 crc kubenswrapper[4661]: I0120 19:03:18.644802 4661 generic.go:334] "Generic (PLEG): container finished" podID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerID="af69256a643cd9b02a3ba0cb27fca5a3367c5bbaae9f317a6e25e9c69b35bc9e" exitCode=2 Jan 20 19:03:18 crc kubenswrapper[4661]: I0120 19:03:18.644867 4661 generic.go:334] "Generic (PLEG): container finished" podID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerID="067f29a421106dafa985de5ca282d0a35c29ced4cfa79d19a85b567dce6f1f31" exitCode=0 Jan 20 19:03:18 crc kubenswrapper[4661]: I0120 19:03:18.645214 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="8abbf31c-d723-4945-9d0c-6268b4100937" containerName="manila-scheduler" containerID="cri-o://1aea894c444a34524f1953d8b2fea6b1b4c902c8ee06cb518d784b937ae71c89" gracePeriod=30 Jan 20 19:03:18 crc kubenswrapper[4661]: I0120 19:03:18.645408 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71b515e-15b7-49d7-b4ba-a3676b1e851b","Type":"ContainerDied","Data":"f9588c446aa9916ba0bcfecbed56426400cad6882de43059c3eb4b4cea4562dc"} Jan 20 19:03:18 crc kubenswrapper[4661]: I0120 19:03:18.645446 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71b515e-15b7-49d7-b4ba-a3676b1e851b","Type":"ContainerDied","Data":"af69256a643cd9b02a3ba0cb27fca5a3367c5bbaae9f317a6e25e9c69b35bc9e"} Jan 20 19:03:18 crc kubenswrapper[4661]: I0120 19:03:18.645475 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71b515e-15b7-49d7-b4ba-a3676b1e851b","Type":"ContainerDied","Data":"067f29a421106dafa985de5ca282d0a35c29ced4cfa79d19a85b567dce6f1f31"} Jan 20 19:03:18 crc kubenswrapper[4661]: I0120 19:03:18.646051 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="8abbf31c-d723-4945-9d0c-6268b4100937" containerName="probe" containerID="cri-o://5ac0f780aa3fdcb68c4c06c3d6edc5ddbec9a0e8410b5468264cafb3ff071c5e" gracePeriod=30 Jan 20 19:03:19 crc kubenswrapper[4661]: I0120 19:03:19.654784 4661 generic.go:334] "Generic (PLEG): container finished" podID="8abbf31c-d723-4945-9d0c-6268b4100937" containerID="5ac0f780aa3fdcb68c4c06c3d6edc5ddbec9a0e8410b5468264cafb3ff071c5e" exitCode=0 Jan 20 19:03:19 crc kubenswrapper[4661]: I0120 19:03:19.654860 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8abbf31c-d723-4945-9d0c-6268b4100937","Type":"ContainerDied","Data":"5ac0f780aa3fdcb68c4c06c3d6edc5ddbec9a0e8410b5468264cafb3ff071c5e"} Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.505546 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.568933 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-ceilometer-tls-certs\") pod \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.569218 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71b515e-15b7-49d7-b4ba-a3676b1e851b-run-httpd\") pod \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.569287 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-scripts\") pod \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.569374 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss57x\" (UniqueName: \"kubernetes.io/projected/e71b515e-15b7-49d7-b4ba-a3676b1e851b-kube-api-access-ss57x\") pod \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.569496 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-config-data\") pod \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.569584 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71b515e-15b7-49d7-b4ba-a3676b1e851b-log-httpd\") pod \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.569704 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-sg-core-conf-yaml\") pod \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.569797 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e71b515e-15b7-49d7-b4ba-a3676b1e851b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e71b515e-15b7-49d7-b4ba-a3676b1e851b" (UID: "e71b515e-15b7-49d7-b4ba-a3676b1e851b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.569866 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-combined-ca-bundle\") pod \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\" (UID: \"e71b515e-15b7-49d7-b4ba-a3676b1e851b\") " Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.570107 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e71b515e-15b7-49d7-b4ba-a3676b1e851b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e71b515e-15b7-49d7-b4ba-a3676b1e851b" (UID: "e71b515e-15b7-49d7-b4ba-a3676b1e851b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.570938 4661 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71b515e-15b7-49d7-b4ba-a3676b1e851b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.570971 4661 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e71b515e-15b7-49d7-b4ba-a3676b1e851b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.577042 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-scripts" (OuterVolumeSpecName: "scripts") pod "e71b515e-15b7-49d7-b4ba-a3676b1e851b" (UID: "e71b515e-15b7-49d7-b4ba-a3676b1e851b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.577077 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e71b515e-15b7-49d7-b4ba-a3676b1e851b-kube-api-access-ss57x" (OuterVolumeSpecName: "kube-api-access-ss57x") pod "e71b515e-15b7-49d7-b4ba-a3676b1e851b" (UID: "e71b515e-15b7-49d7-b4ba-a3676b1e851b"). InnerVolumeSpecName "kube-api-access-ss57x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.634095 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e71b515e-15b7-49d7-b4ba-a3676b1e851b" (UID: "e71b515e-15b7-49d7-b4ba-a3676b1e851b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.650459 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e71b515e-15b7-49d7-b4ba-a3676b1e851b" (UID: "e71b515e-15b7-49d7-b4ba-a3676b1e851b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.669308 4661 generic.go:334] "Generic (PLEG): container finished" podID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerID="199e101302493bdb623964bd04fd5e4b118c08ddcb3a0303e4d761bd8bfb8b49" exitCode=0 Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.669357 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71b515e-15b7-49d7-b4ba-a3676b1e851b","Type":"ContainerDied","Data":"199e101302493bdb623964bd04fd5e4b118c08ddcb3a0303e4d761bd8bfb8b49"} Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.669391 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e71b515e-15b7-49d7-b4ba-a3676b1e851b","Type":"ContainerDied","Data":"83ee3784647fe69a536a01dbf46066e6298e1d9a33896db7c4d78b3b368d23c8"} Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.669412 4661 scope.go:117] "RemoveContainer" containerID="f9588c446aa9916ba0bcfecbed56426400cad6882de43059c3eb4b4cea4562dc" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.669566 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.673227 4661 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.673255 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.673267 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss57x\" (UniqueName: \"kubernetes.io/projected/e71b515e-15b7-49d7-b4ba-a3676b1e851b-kube-api-access-ss57x\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.673276 4661 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.674989 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e71b515e-15b7-49d7-b4ba-a3676b1e851b" (UID: "e71b515e-15b7-49d7-b4ba-a3676b1e851b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.692857 4661 scope.go:117] "RemoveContainer" containerID="af69256a643cd9b02a3ba0cb27fca5a3367c5bbaae9f317a6e25e9c69b35bc9e" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.714833 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-config-data" (OuterVolumeSpecName: "config-data") pod "e71b515e-15b7-49d7-b4ba-a3676b1e851b" (UID: "e71b515e-15b7-49d7-b4ba-a3676b1e851b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.719079 4661 scope.go:117] "RemoveContainer" containerID="067f29a421106dafa985de5ca282d0a35c29ced4cfa79d19a85b567dce6f1f31" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.738991 4661 scope.go:117] "RemoveContainer" containerID="199e101302493bdb623964bd04fd5e4b118c08ddcb3a0303e4d761bd8bfb8b49" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.758496 4661 scope.go:117] "RemoveContainer" containerID="f9588c446aa9916ba0bcfecbed56426400cad6882de43059c3eb4b4cea4562dc" Jan 20 19:03:20 crc kubenswrapper[4661]: E0120 19:03:20.759329 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9588c446aa9916ba0bcfecbed56426400cad6882de43059c3eb4b4cea4562dc\": container with ID starting with f9588c446aa9916ba0bcfecbed56426400cad6882de43059c3eb4b4cea4562dc not found: ID does not exist" containerID="f9588c446aa9916ba0bcfecbed56426400cad6882de43059c3eb4b4cea4562dc" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.759354 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9588c446aa9916ba0bcfecbed56426400cad6882de43059c3eb4b4cea4562dc"} err="failed to get container status \"f9588c446aa9916ba0bcfecbed56426400cad6882de43059c3eb4b4cea4562dc\": rpc error: code = NotFound desc = could not find container \"f9588c446aa9916ba0bcfecbed56426400cad6882de43059c3eb4b4cea4562dc\": container with ID starting with f9588c446aa9916ba0bcfecbed56426400cad6882de43059c3eb4b4cea4562dc not found: ID does not exist" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.759373 4661 scope.go:117] "RemoveContainer" containerID="af69256a643cd9b02a3ba0cb27fca5a3367c5bbaae9f317a6e25e9c69b35bc9e" Jan 20 19:03:20 crc kubenswrapper[4661]: E0120 19:03:20.759583 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af69256a643cd9b02a3ba0cb27fca5a3367c5bbaae9f317a6e25e9c69b35bc9e\": container with ID starting with af69256a643cd9b02a3ba0cb27fca5a3367c5bbaae9f317a6e25e9c69b35bc9e not found: ID does not exist" containerID="af69256a643cd9b02a3ba0cb27fca5a3367c5bbaae9f317a6e25e9c69b35bc9e" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.759613 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af69256a643cd9b02a3ba0cb27fca5a3367c5bbaae9f317a6e25e9c69b35bc9e"} err="failed to get container status \"af69256a643cd9b02a3ba0cb27fca5a3367c5bbaae9f317a6e25e9c69b35bc9e\": rpc error: code = NotFound desc = could not find container \"af69256a643cd9b02a3ba0cb27fca5a3367c5bbaae9f317a6e25e9c69b35bc9e\": container with ID starting with af69256a643cd9b02a3ba0cb27fca5a3367c5bbaae9f317a6e25e9c69b35bc9e not found: ID does not exist" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.759627 4661 scope.go:117] "RemoveContainer" containerID="067f29a421106dafa985de5ca282d0a35c29ced4cfa79d19a85b567dce6f1f31" Jan 20 19:03:20 crc kubenswrapper[4661]: E0120 19:03:20.760261 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"067f29a421106dafa985de5ca282d0a35c29ced4cfa79d19a85b567dce6f1f31\": container with ID starting with 067f29a421106dafa985de5ca282d0a35c29ced4cfa79d19a85b567dce6f1f31 not found: ID does not exist" containerID="067f29a421106dafa985de5ca282d0a35c29ced4cfa79d19a85b567dce6f1f31" Jan 20 19:03:20 crc 
kubenswrapper[4661]: I0120 19:03:20.760283 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"067f29a421106dafa985de5ca282d0a35c29ced4cfa79d19a85b567dce6f1f31"} err="failed to get container status \"067f29a421106dafa985de5ca282d0a35c29ced4cfa79d19a85b567dce6f1f31\": rpc error: code = NotFound desc = could not find container \"067f29a421106dafa985de5ca282d0a35c29ced4cfa79d19a85b567dce6f1f31\": container with ID starting with 067f29a421106dafa985de5ca282d0a35c29ced4cfa79d19a85b567dce6f1f31 not found: ID does not exist" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.760296 4661 scope.go:117] "RemoveContainer" containerID="199e101302493bdb623964bd04fd5e4b118c08ddcb3a0303e4d761bd8bfb8b49" Jan 20 19:03:20 crc kubenswrapper[4661]: E0120 19:03:20.760598 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"199e101302493bdb623964bd04fd5e4b118c08ddcb3a0303e4d761bd8bfb8b49\": container with ID starting with 199e101302493bdb623964bd04fd5e4b118c08ddcb3a0303e4d761bd8bfb8b49 not found: ID does not exist" containerID="199e101302493bdb623964bd04fd5e4b118c08ddcb3a0303e4d761bd8bfb8b49" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.760618 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"199e101302493bdb623964bd04fd5e4b118c08ddcb3a0303e4d761bd8bfb8b49"} err="failed to get container status \"199e101302493bdb623964bd04fd5e4b118c08ddcb3a0303e4d761bd8bfb8b49\": rpc error: code = NotFound desc = could not find container \"199e101302493bdb623964bd04fd5e4b118c08ddcb3a0303e4d761bd8bfb8b49\": container with ID starting with 199e101302493bdb623964bd04fd5e4b118c08ddcb3a0303e4d761bd8bfb8b49 not found: ID does not exist" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.774510 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:20 crc kubenswrapper[4661]: I0120 19:03:20.774531 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e71b515e-15b7-49d7-b4ba-a3676b1e851b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.022463 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.040758 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.055119 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 19:03:21 crc kubenswrapper[4661]: E0120 19:03:21.055597 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="proxy-httpd" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.055623 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="proxy-httpd" Jan 20 19:03:21 crc kubenswrapper[4661]: E0120 19:03:21.055694 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="ceilometer-notification-agent" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.055705 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" 
containerName="ceilometer-notification-agent" Jan 20 19:03:21 crc kubenswrapper[4661]: E0120 19:03:21.055724 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="ceilometer-central-agent" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.055732 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="ceilometer-central-agent" Jan 20 19:03:21 crc kubenswrapper[4661]: E0120 19:03:21.055745 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="sg-core" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.055753 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="sg-core" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.055979 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="ceilometer-central-agent" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.056001 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="sg-core" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.056012 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="ceilometer-notification-agent" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.056028 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" containerName="proxy-httpd" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.058033 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.064507 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.064760 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.065043 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.067832 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.181973 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.182067 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-scripts\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.182143 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.182299 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.182427 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/359e2c27-69df-47ab-95bb-e9f70c04f988-run-httpd\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.182566 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2cwj\" (UniqueName: \"kubernetes.io/projected/359e2c27-69df-47ab-95bb-e9f70c04f988-kube-api-access-q2cwj\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.182693 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/359e2c27-69df-47ab-95bb-e9f70c04f988-log-httpd\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.182908 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-config-data\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.284652 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/359e2c27-69df-47ab-95bb-e9f70c04f988-run-httpd\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.284705 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cwj\" (UniqueName: \"kubernetes.io/projected/359e2c27-69df-47ab-95bb-e9f70c04f988-kube-api-access-q2cwj\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.284736 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/359e2c27-69df-47ab-95bb-e9f70c04f988-log-httpd\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.284776 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-config-data\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.284808 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.284856 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-scripts\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.284895 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.284927 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.285332 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/359e2c27-69df-47ab-95bb-e9f70c04f988-run-httpd\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.285432 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/359e2c27-69df-47ab-95bb-e9f70c04f988-log-httpd\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.289780 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-config-data\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.291108 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.291524 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.295140 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-scripts\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.295722 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/359e2c27-69df-47ab-95bb-e9f70c04f988-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.313137 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2cwj\" (UniqueName: \"kubernetes.io/projected/359e2c27-69df-47ab-95bb-e9f70c04f988-kube-api-access-q2cwj\") pod \"ceilometer-0\" (UID: \"359e2c27-69df-47ab-95bb-e9f70c04f988\") " pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.378544 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 19:03:21 crc kubenswrapper[4661]: I0120 19:03:21.841281 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 19:03:21 crc kubenswrapper[4661]: W0120 19:03:21.855825 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod359e2c27_69df_47ab_95bb_e9f70c04f988.slice/crio-aecfd97e5e15a43b465ffa016989b291b0c597a7a01ae6e0d98e9d6ebdfb41fb WatchSource:0}: Error finding container aecfd97e5e15a43b465ffa016989b291b0c597a7a01ae6e0d98e9d6ebdfb41fb: Status 404 returned error can't find the container with id aecfd97e5e15a43b465ffa016989b291b0c597a7a01ae6e0d98e9d6ebdfb41fb Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.070299 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.102428 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-scripts\") pod \"8abbf31c-d723-4945-9d0c-6268b4100937\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.102519 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-config-data-custom\") pod \"8abbf31c-d723-4945-9d0c-6268b4100937\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.102592 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-config-data\") pod \"8abbf31c-d723-4945-9d0c-6268b4100937\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.102696 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjmsp\" (UniqueName: \"kubernetes.io/projected/8abbf31c-d723-4945-9d0c-6268b4100937-kube-api-access-hjmsp\") pod \"8abbf31c-d723-4945-9d0c-6268b4100937\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.102754 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-combined-ca-bundle\") pod \"8abbf31c-d723-4945-9d0c-6268b4100937\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.102810 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8abbf31c-d723-4945-9d0c-6268b4100937-etc-machine-id\") pod \"8abbf31c-d723-4945-9d0c-6268b4100937\" (UID: \"8abbf31c-d723-4945-9d0c-6268b4100937\") " Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.103941 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8abbf31c-d723-4945-9d0c-6268b4100937-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8abbf31c-d723-4945-9d0c-6268b4100937" (UID: "8abbf31c-d723-4945-9d0c-6268b4100937"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.104275 4661 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8abbf31c-d723-4945-9d0c-6268b4100937-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.111518 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8abbf31c-d723-4945-9d0c-6268b4100937" (UID: "8abbf31c-d723-4945-9d0c-6268b4100937"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.112188 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-scripts" (OuterVolumeSpecName: "scripts") pod "8abbf31c-d723-4945-9d0c-6268b4100937" (UID: "8abbf31c-d723-4945-9d0c-6268b4100937"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.128009 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8abbf31c-d723-4945-9d0c-6268b4100937-kube-api-access-hjmsp" (OuterVolumeSpecName: "kube-api-access-hjmsp") pod "8abbf31c-d723-4945-9d0c-6268b4100937" (UID: "8abbf31c-d723-4945-9d0c-6268b4100937"). InnerVolumeSpecName "kube-api-access-hjmsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.187886 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e71b515e-15b7-49d7-b4ba-a3676b1e851b" path="/var/lib/kubelet/pods/e71b515e-15b7-49d7-b4ba-a3676b1e851b/volumes" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.206078 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjmsp\" (UniqueName: \"kubernetes.io/projected/8abbf31c-d723-4945-9d0c-6268b4100937-kube-api-access-hjmsp\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.206259 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.206314 4661 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.207019 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8abbf31c-d723-4945-9d0c-6268b4100937" (UID: "8abbf31c-d723-4945-9d0c-6268b4100937"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.238444 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-config-data" (OuterVolumeSpecName: "config-data") pod "8abbf31c-d723-4945-9d0c-6268b4100937" (UID: "8abbf31c-d723-4945-9d0c-6268b4100937"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.307921 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.308072 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abbf31c-d723-4945-9d0c-6268b4100937-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.689193 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"359e2c27-69df-47ab-95bb-e9f70c04f988","Type":"ContainerStarted","Data":"6baf7a6042562c490c5f7d395d470240f5f43627271d54f6b5e1bdf06a3b7557"} Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.689773 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"359e2c27-69df-47ab-95bb-e9f70c04f988","Type":"ContainerStarted","Data":"aecfd97e5e15a43b465ffa016989b291b0c597a7a01ae6e0d98e9d6ebdfb41fb"} Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.691822 4661 generic.go:334] "Generic (PLEG): container finished" podID="8abbf31c-d723-4945-9d0c-6268b4100937" containerID="1aea894c444a34524f1953d8b2fea6b1b4c902c8ee06cb518d784b937ae71c89" exitCode=0 Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.691875 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8abbf31c-d723-4945-9d0c-6268b4100937","Type":"ContainerDied","Data":"1aea894c444a34524f1953d8b2fea6b1b4c902c8ee06cb518d784b937ae71c89"} Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.692100 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8abbf31c-d723-4945-9d0c-6268b4100937","Type":"ContainerDied","Data":"92e679e67c6189e0ca9cfbbbdd4a96a8f81ca44d4f5dda995a36ed5350aa4787"} Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.691887 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.692207 4661 scope.go:117] "RemoveContainer" containerID="5ac0f780aa3fdcb68c4c06c3d6edc5ddbec9a0e8410b5468264cafb3ff071c5e" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.740080 4661 scope.go:117] "RemoveContainer" containerID="1aea894c444a34524f1953d8b2fea6b1b4c902c8ee06cb518d784b937ae71c89" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.747032 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.762112 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.766868 4661 scope.go:117] "RemoveContainer" containerID="5ac0f780aa3fdcb68c4c06c3d6edc5ddbec9a0e8410b5468264cafb3ff071c5e" Jan 20 19:03:22 crc kubenswrapper[4661]: E0120 19:03:22.777028 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ac0f780aa3fdcb68c4c06c3d6edc5ddbec9a0e8410b5468264cafb3ff071c5e\": container with ID starting with 5ac0f780aa3fdcb68c4c06c3d6edc5ddbec9a0e8410b5468264cafb3ff071c5e not found: ID does not exist" containerID="5ac0f780aa3fdcb68c4c06c3d6edc5ddbec9a0e8410b5468264cafb3ff071c5e" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.777081 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ac0f780aa3fdcb68c4c06c3d6edc5ddbec9a0e8410b5468264cafb3ff071c5e"} err="failed to get container status \"5ac0f780aa3fdcb68c4c06c3d6edc5ddbec9a0e8410b5468264cafb3ff071c5e\": rpc error: code = NotFound desc = could not find container \"5ac0f780aa3fdcb68c4c06c3d6edc5ddbec9a0e8410b5468264cafb3ff071c5e\": container with ID starting with 5ac0f780aa3fdcb68c4c06c3d6edc5ddbec9a0e8410b5468264cafb3ff071c5e not found: ID does not exist" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.777121 4661 scope.go:117] "RemoveContainer" containerID="1aea894c444a34524f1953d8b2fea6b1b4c902c8ee06cb518d784b937ae71c89" Jan 20 19:03:22 crc kubenswrapper[4661]: E0120 19:03:22.778867 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aea894c444a34524f1953d8b2fea6b1b4c902c8ee06cb518d784b937ae71c89\": container with ID starting with 1aea894c444a34524f1953d8b2fea6b1b4c902c8ee06cb518d784b937ae71c89 not found: ID does not exist" containerID="1aea894c444a34524f1953d8b2fea6b1b4c902c8ee06cb518d784b937ae71c89" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.778898 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aea894c444a34524f1953d8b2fea6b1b4c902c8ee06cb518d784b937ae71c89"} err="failed to get container status \"1aea894c444a34524f1953d8b2fea6b1b4c902c8ee06cb518d784b937ae71c89\": rpc error: code = NotFound desc = could not find container \"1aea894c444a34524f1953d8b2fea6b1b4c902c8ee06cb518d784b937ae71c89\": container with ID starting with 1aea894c444a34524f1953d8b2fea6b1b4c902c8ee06cb518d784b937ae71c89 not found: ID does not exist" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.783744 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 19:03:22 crc kubenswrapper[4661]: E0120 19:03:22.784168 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8abbf31c-d723-4945-9d0c-6268b4100937" 
containerName="manila-scheduler" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.784185 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="8abbf31c-d723-4945-9d0c-6268b4100937" containerName="manila-scheduler" Jan 20 19:03:22 crc kubenswrapper[4661]: E0120 19:03:22.784214 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8abbf31c-d723-4945-9d0c-6268b4100937" containerName="probe" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.784221 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="8abbf31c-d723-4945-9d0c-6268b4100937" containerName="probe" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.784383 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="8abbf31c-d723-4945-9d0c-6268b4100937" containerName="probe" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.784399 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="8abbf31c-d723-4945-9d0c-6268b4100937" containerName="manila-scheduler" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.785574 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.789832 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.794931 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.821025 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4217d44-feda-4241-9ede-1e22b3324b01-config-data\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.821081 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4217d44-feda-4241-9ede-1e22b3324b01-scripts\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.821142 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4217d44-feda-4241-9ede-1e22b3324b01-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.821197 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4217d44-feda-4241-9ede-1e22b3324b01-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.821265 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpk59\" (UniqueName: \"kubernetes.io/projected/c4217d44-feda-4241-9ede-1e22b3324b01-kube-api-access-jpk59\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.821300 4661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4217d44-feda-4241-9ede-1e22b3324b01-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.923243 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4217d44-feda-4241-9ede-1e22b3324b01-scripts\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.923327 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4217d44-feda-4241-9ede-1e22b3324b01-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.923376 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4217d44-feda-4241-9ede-1e22b3324b01-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.923413 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpk59\" (UniqueName: \"kubernetes.io/projected/c4217d44-feda-4241-9ede-1e22b3324b01-kube-api-access-jpk59\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.923430 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4217d44-feda-4241-9ede-1e22b3324b01-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.923513 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4217d44-feda-4241-9ede-1e22b3324b01-config-data\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.923619 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4217d44-feda-4241-9ede-1e22b3324b01-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.928364 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4217d44-feda-4241-9ede-1e22b3324b01-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.929119 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4217d44-feda-4241-9ede-1e22b3324b01-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: 
\"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.929336 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4217d44-feda-4241-9ede-1e22b3324b01-scripts\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.931288 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4217d44-feda-4241-9ede-1e22b3324b01-config-data\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:22 crc kubenswrapper[4661]: I0120 19:03:22.940319 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpk59\" (UniqueName: \"kubernetes.io/projected/c4217d44-feda-4241-9ede-1e22b3324b01-kube-api-access-jpk59\") pod \"manila-scheduler-0\" (UID: \"c4217d44-feda-4241-9ede-1e22b3324b01\") " pod="openstack/manila-scheduler-0" Jan 20 19:03:23 crc kubenswrapper[4661]: I0120 19:03:23.137307 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 19:03:23 crc kubenswrapper[4661]: I0120 19:03:23.485942 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-844bcbddd8-v9pcc" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Jan 20 19:03:23 crc kubenswrapper[4661]: I0120 19:03:23.648067 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 19:03:23 crc kubenswrapper[4661]: I0120 19:03:23.713506 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"c4217d44-feda-4241-9ede-1e22b3324b01","Type":"ContainerStarted","Data":"0376229fa7f34789c491175d89772b6155b6b4830d5e24ae7e9bb294acf9bedd"} Jan 20 19:03:23 crc kubenswrapper[4661]: I0120 19:03:23.729272 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"359e2c27-69df-47ab-95bb-e9f70c04f988","Type":"ContainerStarted","Data":"75cfa97fd7f21fc243138a10ab10c4785ffe64ebab4dd49515561d66a4c284f2"} Jan 20 19:03:24 crc kubenswrapper[4661]: I0120 19:03:24.162971 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8abbf31c-d723-4945-9d0c-6268b4100937" path="/var/lib/kubelet/pods/8abbf31c-d723-4945-9d0c-6268b4100937/volumes" Jan 20 19:03:24 crc kubenswrapper[4661]: I0120 19:03:24.607984 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 20 19:03:24 crc kubenswrapper[4661]: I0120 19:03:24.743783 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"359e2c27-69df-47ab-95bb-e9f70c04f988","Type":"ContainerStarted","Data":"de3ff4ed35781951ea55431cbbf48fd150e4d1b88502b3da92e3cc4175de26ff"} Jan 20 19:03:24 crc kubenswrapper[4661]: I0120 19:03:24.746725 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"c4217d44-feda-4241-9ede-1e22b3324b01","Type":"ContainerStarted","Data":"506b8b0d534dc2a2de39eb3b73e04cd4968f8363ecd62c59d19c09463d5ca803"} Jan 20 19:03:24 crc kubenswrapper[4661]: I0120 
19:03:24.746759 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"c4217d44-feda-4241-9ede-1e22b3324b01","Type":"ContainerStarted","Data":"fcf4e48ae43c72acfb122427a27624b465c79334b9ac4201badbf086429ea840"} Jan 20 19:03:24 crc kubenswrapper[4661]: I0120 19:03:24.781940 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.781918771 podStartE2EDuration="2.781918771s" podCreationTimestamp="2026-01-20 19:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 19:03:24.773462582 +0000 UTC m=+3461.104252244" watchObservedRunningTime="2026-01-20 19:03:24.781918771 +0000 UTC m=+3461.112708433" Jan 20 19:03:25 crc kubenswrapper[4661]: I0120 19:03:25.757913 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"359e2c27-69df-47ab-95bb-e9f70c04f988","Type":"ContainerStarted","Data":"b090c65ba391cf3ea8367c9ad62a4f8b22b7bedc5a320c2cb2410d38f4cd4b7c"} Jan 20 19:03:26 crc kubenswrapper[4661]: I0120 19:03:26.765638 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 19:03:27 crc kubenswrapper[4661]: I0120 19:03:27.479861 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 20 19:03:27 crc kubenswrapper[4661]: I0120 19:03:27.516260 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.280690849 podStartE2EDuration="6.516234219s" podCreationTimestamp="2026-01-20 19:03:21 +0000 UTC" firstStartedPulling="2026-01-20 19:03:21.858957997 +0000 UTC m=+3458.189747679" lastFinishedPulling="2026-01-20 19:03:25.094501387 +0000 UTC m=+3461.425291049" observedRunningTime="2026-01-20 19:03:25.786248495 +0000 UTC m=+3462.117038157" watchObservedRunningTime="2026-01-20 19:03:27.516234219 +0000 UTC m=+3463.847023921" Jan 20 19:03:27 crc kubenswrapper[4661]: I0120 19:03:27.571570 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 19:03:27 crc kubenswrapper[4661]: I0120 19:03:27.775146 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="adb303b3-0970-4438-92ee-dd30a6fb55b2" containerName="manila-share" containerID="cri-o://955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa" gracePeriod=30 Jan 20 19:03:27 crc kubenswrapper[4661]: I0120 19:03:27.775264 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="adb303b3-0970-4438-92ee-dd30a6fb55b2" containerName="probe" containerID="cri-o://5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8" gracePeriod=30 Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.723608 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.780516 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-combined-ca-bundle\") pod \"adb303b3-0970-4438-92ee-dd30a6fb55b2\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.780632 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adb303b3-0970-4438-92ee-dd30a6fb55b2-etc-machine-id\") pod \"adb303b3-0970-4438-92ee-dd30a6fb55b2\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.780682 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adb303b3-0970-4438-92ee-dd30a6fb55b2-ceph\") pod \"adb303b3-0970-4438-92ee-dd30a6fb55b2\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.780752 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-config-data\") pod \"adb303b3-0970-4438-92ee-dd30a6fb55b2\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.780821 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adb303b3-0970-4438-92ee-dd30a6fb55b2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "adb303b3-0970-4438-92ee-dd30a6fb55b2" (UID: "adb303b3-0970-4438-92ee-dd30a6fb55b2"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.780886 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/adb303b3-0970-4438-92ee-dd30a6fb55b2-var-lib-manila\") pod \"adb303b3-0970-4438-92ee-dd30a6fb55b2\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.780935 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-config-data-custom\") pod \"adb303b3-0970-4438-92ee-dd30a6fb55b2\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.780969 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzmjv\" (UniqueName: \"kubernetes.io/projected/adb303b3-0970-4438-92ee-dd30a6fb55b2-kube-api-access-dzmjv\") pod \"adb303b3-0970-4438-92ee-dd30a6fb55b2\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.780990 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-scripts\") pod \"adb303b3-0970-4438-92ee-dd30a6fb55b2\" (UID: \"adb303b3-0970-4438-92ee-dd30a6fb55b2\") " Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.781533 4661 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adb303b3-0970-4438-92ee-dd30a6fb55b2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.781540 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adb303b3-0970-4438-92ee-dd30a6fb55b2-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "adb303b3-0970-4438-92ee-dd30a6fb55b2" (UID: "adb303b3-0970-4438-92ee-dd30a6fb55b2"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.788327 4661 generic.go:334] "Generic (PLEG): container finished" podID="adb303b3-0970-4438-92ee-dd30a6fb55b2" containerID="5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8" exitCode=0 Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.788365 4661 generic.go:334] "Generic (PLEG): container finished" podID="adb303b3-0970-4438-92ee-dd30a6fb55b2" containerID="955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa" exitCode=1 Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.788390 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"adb303b3-0970-4438-92ee-dd30a6fb55b2","Type":"ContainerDied","Data":"5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8"} Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.788421 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"adb303b3-0970-4438-92ee-dd30a6fb55b2","Type":"ContainerDied","Data":"955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa"} Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.788433 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"adb303b3-0970-4438-92ee-dd30a6fb55b2","Type":"ContainerDied","Data":"d6c9eebd02b32f6e3a5bf26f2aa0533c9c3c023243cf8c61391f8542756f38d9"} Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.788450 4661 scope.go:117] "RemoveContainer" containerID="5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.788873 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.789235 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb303b3-0970-4438-92ee-dd30a6fb55b2-kube-api-access-dzmjv" (OuterVolumeSpecName: "kube-api-access-dzmjv") pod "adb303b3-0970-4438-92ee-dd30a6fb55b2" (UID: "adb303b3-0970-4438-92ee-dd30a6fb55b2"). InnerVolumeSpecName "kube-api-access-dzmjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.791976 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-scripts" (OuterVolumeSpecName: "scripts") pod "adb303b3-0970-4438-92ee-dd30a6fb55b2" (UID: "adb303b3-0970-4438-92ee-dd30a6fb55b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.794445 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "adb303b3-0970-4438-92ee-dd30a6fb55b2" (UID: "adb303b3-0970-4438-92ee-dd30a6fb55b2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.805197 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb303b3-0970-4438-92ee-dd30a6fb55b2-ceph" (OuterVolumeSpecName: "ceph") pod "adb303b3-0970-4438-92ee-dd30a6fb55b2" (UID: "adb303b3-0970-4438-92ee-dd30a6fb55b2"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.867208 4661 scope.go:117] "RemoveContainer" containerID="955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.886196 4661 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adb303b3-0970-4438-92ee-dd30a6fb55b2-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.886259 4661 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/adb303b3-0970-4438-92ee-dd30a6fb55b2-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.886275 4661 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.886286 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzmjv\" (UniqueName: \"kubernetes.io/projected/adb303b3-0970-4438-92ee-dd30a6fb55b2-kube-api-access-dzmjv\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.886308 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.893907 4661 scope.go:117] "RemoveContainer" containerID="5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8" Jan 20 19:03:28 crc kubenswrapper[4661]: E0120 19:03:28.894487 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8\": container with ID starting with 5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8 not found: ID does not exist" containerID="5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.894582 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8"} err="failed to get container status \"5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8\": rpc error: code = NotFound desc = could not find container \"5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8\": container with ID starting with 5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8 not found: ID does not exist" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.894655 4661 scope.go:117] "RemoveContainer" containerID="955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa" Jan 20 19:03:28 crc kubenswrapper[4661]: E0120 19:03:28.895370 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa\": container with ID starting with 955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa not found: ID does not exist" containerID="955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.895546 4661 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa"} err="failed to get container status \"955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa\": rpc error: code = NotFound desc = could not find container \"955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa\": container with ID starting with 955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa not found: ID does not exist" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.895629 4661 scope.go:117] "RemoveContainer" containerID="5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.896055 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8"} err="failed to get container status \"5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8\": rpc error: code = NotFound desc = could not find container \"5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8\": container with ID starting with 5cb80831e7331caf380495931de3ab180eb8f4a45c38b61123eae8ca5723e0c8 not found: ID does not exist" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.896365 4661 scope.go:117] "RemoveContainer" containerID="955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.896690 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa"} err="failed to get container status \"955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa\": rpc error: code = NotFound desc = could not find container \"955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa\": container with ID starting with 955b99ce833f27e5075b6196fc82597d63b2e76bbc3f6c10cface018594d8efa not found: ID does not exist" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.896829 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adb303b3-0970-4438-92ee-dd30a6fb55b2" (UID: "adb303b3-0970-4438-92ee-dd30a6fb55b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.922219 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-config-data" (OuterVolumeSpecName: "config-data") pod "adb303b3-0970-4438-92ee-dd30a6fb55b2" (UID: "adb303b3-0970-4438-92ee-dd30a6fb55b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.988212 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:28 crc kubenswrapper[4661]: I0120 19:03:28.988249 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb303b3-0970-4438-92ee-dd30a6fb55b2-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.153747 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.162864 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.183250 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 19:03:29 crc kubenswrapper[4661]: E0120 19:03:29.184584 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb303b3-0970-4438-92ee-dd30a6fb55b2" containerName="probe" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.184606 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb303b3-0970-4438-92ee-dd30a6fb55b2" containerName="probe" Jan 20 19:03:29 crc kubenswrapper[4661]: E0120 19:03:29.184616 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb303b3-0970-4438-92ee-dd30a6fb55b2" containerName="manila-share" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.184623 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb303b3-0970-4438-92ee-dd30a6fb55b2" containerName="manila-share" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.184821 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb303b3-0970-4438-92ee-dd30a6fb55b2" containerName="manila-share" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.184847 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb303b3-0970-4438-92ee-dd30a6fb55b2" containerName="probe" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.185749 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.195946 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.197679 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.292115 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c223935-3aa9-491c-8f9d-638441f57742-scripts\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.292249 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c223935-3aa9-491c-8f9d-638441f57742-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.292279 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c223935-3aa9-491c-8f9d-638441f57742-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.292343 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6c223935-3aa9-491c-8f9d-638441f57742-ceph\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.292406 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/6c223935-3aa9-491c-8f9d-638441f57742-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.292447 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c223935-3aa9-491c-8f9d-638441f57742-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.292471 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c223935-3aa9-491c-8f9d-638441f57742-config-data\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.292515 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trfth\" (UniqueName: \"kubernetes.io/projected/6c223935-3aa9-491c-8f9d-638441f57742-kube-api-access-trfth\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc 
kubenswrapper[4661]: I0120 19:03:29.323183 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.323388 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.323499 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.324284 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"03255bad160b69aedb631395e65d9a4b12434de8081b19d4e9a6358a608a74a9"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.324393 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://03255bad160b69aedb631395e65d9a4b12434de8081b19d4e9a6358a608a74a9" gracePeriod=600 Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.394336 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c223935-3aa9-491c-8f9d-638441f57742-scripts\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.394434 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c223935-3aa9-491c-8f9d-638441f57742-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.394467 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c223935-3aa9-491c-8f9d-638441f57742-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.394531 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6c223935-3aa9-491c-8f9d-638441f57742-ceph\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.394587 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/6c223935-3aa9-491c-8f9d-638441f57742-var-lib-manila\") pod \"manila-share-share1-0\" (UID: 
\"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.394622 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c223935-3aa9-491c-8f9d-638441f57742-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.394647 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c223935-3aa9-491c-8f9d-638441f57742-config-data\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.394699 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trfth\" (UniqueName: \"kubernetes.io/projected/6c223935-3aa9-491c-8f9d-638441f57742-kube-api-access-trfth\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.395115 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c223935-3aa9-491c-8f9d-638441f57742-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.395498 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/6c223935-3aa9-491c-8f9d-638441f57742-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.403838 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c223935-3aa9-491c-8f9d-638441f57742-config-data\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.403938 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c223935-3aa9-491c-8f9d-638441f57742-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.404914 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c223935-3aa9-491c-8f9d-638441f57742-scripts\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.409202 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6c223935-3aa9-491c-8f9d-638441f57742-ceph\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.429490 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6c223935-3aa9-491c-8f9d-638441f57742-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.430727 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trfth\" (UniqueName: \"kubernetes.io/projected/6c223935-3aa9-491c-8f9d-638441f57742-kube-api-access-trfth\") pod \"manila-share-share1-0\" (UID: \"6c223935-3aa9-491c-8f9d-638441f57742\") " pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.504183 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.799366 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="03255bad160b69aedb631395e65d9a4b12434de8081b19d4e9a6358a608a74a9" exitCode=0 Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.799678 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"03255bad160b69aedb631395e65d9a4b12434de8081b19d4e9a6358a608a74a9"} Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.799705 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06"} Jan 20 19:03:29 crc kubenswrapper[4661]: I0120 19:03:29.799723 4661 scope.go:117] "RemoveContainer" containerID="8cf060cb371bfd85fb7ec3dfa349258b41b6b6371ddbb02e6b4120e489296593" Jan 20 19:03:30 crc kubenswrapper[4661]: I0120 19:03:30.139247 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 19:03:30 crc kubenswrapper[4661]: I0120 19:03:30.154009 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb303b3-0970-4438-92ee-dd30a6fb55b2" path="/var/lib/kubelet/pods/adb303b3-0970-4438-92ee-dd30a6fb55b2/volumes" Jan 20 19:03:30 crc kubenswrapper[4661]: I0120 19:03:30.840956 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"6c223935-3aa9-491c-8f9d-638441f57742","Type":"ContainerStarted","Data":"31af327112c19445bbc3984be0cd50250722453b936f27b6471e1333e804224f"} Jan 20 19:03:30 crc kubenswrapper[4661]: I0120 19:03:30.841321 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"6c223935-3aa9-491c-8f9d-638441f57742","Type":"ContainerStarted","Data":"9b4ec1515c79ccb0f9a38d30599e1e3b8b2a357081c6afabe77d1938ab386b4e"} Jan 20 19:03:31 crc kubenswrapper[4661]: I0120 19:03:31.877115 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"6c223935-3aa9-491c-8f9d-638441f57742","Type":"ContainerStarted","Data":"440bcf2a9dbb24caf4e6f5e7c047f8b1a07775d5ee98bee36ac5d89cfd261454"} Jan 20 19:03:31 crc kubenswrapper[4661]: I0120 19:03:31.903116 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.90309572 podStartE2EDuration="2.90309572s" podCreationTimestamp="2026-01-20 19:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 19:03:31.896085128 +0000 UTC m=+3468.226874790" watchObservedRunningTime="2026-01-20 19:03:31.90309572 +0000 UTC m=+3468.233885382" Jan 20 19:03:33 crc kubenswrapper[4661]: I0120 19:03:33.138827 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 20 19:03:33 crc kubenswrapper[4661]: I0120 19:03:33.486503 4661 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-844bcbddd8-v9pcc" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Jan 20 19:03:38 crc kubenswrapper[4661]: I0120 19:03:38.958780 4661 generic.go:334] "Generic (PLEG): container finished" podID="ea579f19-b21d-4098-8f52-517be45768fb" containerID="5111dbc27e814b94b5795799cd69cac1ddd769cbf1816e3666e0f1967d88e741" exitCode=137 Jan 20 19:03:38 crc kubenswrapper[4661]: I0120 19:03:38.958876 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-844bcbddd8-v9pcc" event={"ID":"ea579f19-b21d-4098-8f52-517be45768fb","Type":"ContainerDied","Data":"5111dbc27e814b94b5795799cd69cac1ddd769cbf1816e3666e0f1967d88e741"} Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.227387 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.313615 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-horizon-secret-key\") pod \"ea579f19-b21d-4098-8f52-517be45768fb\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.313658 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz2vc\" (UniqueName: \"kubernetes.io/projected/ea579f19-b21d-4098-8f52-517be45768fb-kube-api-access-bz2vc\") pod \"ea579f19-b21d-4098-8f52-517be45768fb\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.313738 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-combined-ca-bundle\") pod \"ea579f19-b21d-4098-8f52-517be45768fb\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.314495 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea579f19-b21d-4098-8f52-517be45768fb-logs\") pod \"ea579f19-b21d-4098-8f52-517be45768fb\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.314621 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea579f19-b21d-4098-8f52-517be45768fb-scripts\") pod \"ea579f19-b21d-4098-8f52-517be45768fb\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.314728 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea579f19-b21d-4098-8f52-517be45768fb-config-data\") pod 
\"ea579f19-b21d-4098-8f52-517be45768fb\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.314752 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-horizon-tls-certs\") pod \"ea579f19-b21d-4098-8f52-517be45768fb\" (UID: \"ea579f19-b21d-4098-8f52-517be45768fb\") " Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.314963 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea579f19-b21d-4098-8f52-517be45768fb-logs" (OuterVolumeSpecName: "logs") pod "ea579f19-b21d-4098-8f52-517be45768fb" (UID: "ea579f19-b21d-4098-8f52-517be45768fb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.315281 4661 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea579f19-b21d-4098-8f52-517be45768fb-logs\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.337766 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea579f19-b21d-4098-8f52-517be45768fb-kube-api-access-bz2vc" (OuterVolumeSpecName: "kube-api-access-bz2vc") pod "ea579f19-b21d-4098-8f52-517be45768fb" (UID: "ea579f19-b21d-4098-8f52-517be45768fb"). InnerVolumeSpecName "kube-api-access-bz2vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.340583 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea579f19-b21d-4098-8f52-517be45768fb-config-data" (OuterVolumeSpecName: "config-data") pod "ea579f19-b21d-4098-8f52-517be45768fb" (UID: "ea579f19-b21d-4098-8f52-517be45768fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.343485 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea579f19-b21d-4098-8f52-517be45768fb-scripts" (OuterVolumeSpecName: "scripts") pod "ea579f19-b21d-4098-8f52-517be45768fb" (UID: "ea579f19-b21d-4098-8f52-517be45768fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.355929 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ea579f19-b21d-4098-8f52-517be45768fb" (UID: "ea579f19-b21d-4098-8f52-517be45768fb"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.377684 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea579f19-b21d-4098-8f52-517be45768fb" (UID: "ea579f19-b21d-4098-8f52-517be45768fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.395235 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "ea579f19-b21d-4098-8f52-517be45768fb" (UID: "ea579f19-b21d-4098-8f52-517be45768fb"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.417139 4661 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.417173 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz2vc\" (UniqueName: \"kubernetes.io/projected/ea579f19-b21d-4098-8f52-517be45768fb-kube-api-access-bz2vc\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.417185 4661 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.417196 4661 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea579f19-b21d-4098-8f52-517be45768fb-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.417204 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea579f19-b21d-4098-8f52-517be45768fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.417213 4661 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea579f19-b21d-4098-8f52-517be45768fb-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.504774 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.971109 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-844bcbddd8-v9pcc" event={"ID":"ea579f19-b21d-4098-8f52-517be45768fb","Type":"ContainerDied","Data":"3fcc2297ae30cc28d680d0e8e286bd5cc6b552e3381dd6df9d6b9cabb32b985e"} Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.971168 4661 scope.go:117] "RemoveContainer" containerID="f921a9bdfcd4155d0506b5d9b39786056aba403a0899c9ec152853ac2b3655f1" Jan 20 19:03:39 crc kubenswrapper[4661]: I0120 19:03:39.971294 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-844bcbddd8-v9pcc" Jan 20 19:03:40 crc kubenswrapper[4661]: I0120 19:03:40.030861 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-844bcbddd8-v9pcc"] Jan 20 19:03:40 crc kubenswrapper[4661]: I0120 19:03:40.042598 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-844bcbddd8-v9pcc"] Jan 20 19:03:40 crc kubenswrapper[4661]: I0120 19:03:40.163794 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea579f19-b21d-4098-8f52-517be45768fb" path="/var/lib/kubelet/pods/ea579f19-b21d-4098-8f52-517be45768fb/volumes" Jan 20 19:03:40 crc kubenswrapper[4661]: I0120 19:03:40.238265 4661 scope.go:117] "RemoveContainer" containerID="5111dbc27e814b94b5795799cd69cac1ddd769cbf1816e3666e0f1967d88e741" Jan 20 19:03:44 crc kubenswrapper[4661]: I0120 19:03:44.654633 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 20 19:03:51 crc kubenswrapper[4661]: I0120 19:03:51.113853 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 20 19:03:51 crc kubenswrapper[4661]: I0120 19:03:51.388898 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 19:04:59 crc kubenswrapper[4661]: I0120 19:04:59.832244 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 20 19:04:59 crc kubenswrapper[4661]: E0120 19:04:59.833139 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon" Jan 20 19:04:59 crc kubenswrapper[4661]: I0120 19:04:59.833155 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon" Jan 20 19:04:59 crc kubenswrapper[4661]: E0120 19:04:59.833177 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon-log" Jan 20 19:04:59 crc kubenswrapper[4661]: I0120 19:04:59.833183 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon-log" Jan 20 19:04:59 crc kubenswrapper[4661]: I0120 19:04:59.833356 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon" Jan 20 19:04:59 crc kubenswrapper[4661]: I0120 19:04:59.833372 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea579f19-b21d-4098-8f52-517be45768fb" containerName="horizon-log" Jan 20 19:04:59 crc kubenswrapper[4661]: I0120 19:04:59.833966 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 20 19:04:59 crc kubenswrapper[4661]: I0120 19:04:59.836449 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 20 19:04:59 crc kubenswrapper[4661]: I0120 19:04:59.837803 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 20 19:04:59 crc kubenswrapper[4661]: I0120 19:04:59.837975 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zxggd" Jan 20 19:04:59 crc kubenswrapper[4661]: I0120 19:04:59.838166 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 20 19:04:59 crc kubenswrapper[4661]: I0120 19:04:59.849061 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.000538 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.001131 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.001256 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.001411 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.001530 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.001708 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.001880 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-config-data\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.002066 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.002218 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgw94\" (UniqueName: \"kubernetes.io/projected/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-kube-api-access-mgw94\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.103803 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.103850 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgw94\" (UniqueName: \"kubernetes.io/projected/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-kube-api-access-mgw94\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.103898 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.103928 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.103946 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.103985 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.104005 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.104038 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.104065 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-config-data\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.105085 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-config-data\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.106304 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.107963 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.108083 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.108360 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.120313 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.125915 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-ca-certs\") pod \"tempest-tests-tempest\" (UID: 
\"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.129328 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgw94\" (UniqueName: \"kubernetes.io/projected/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-kube-api-access-mgw94\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.137273 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.179141 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.466698 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 20 19:05:00 crc kubenswrapper[4661]: I0120 19:05:00.985648 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 20 19:05:01 crc kubenswrapper[4661]: I0120 19:05:01.902775 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff","Type":"ContainerStarted","Data":"8ce27c5ea5e20be44b55091f7303b1c9971c4e3009cba0b0f474d5df5d3cd949"} Jan 20 19:05:29 crc kubenswrapper[4661]: I0120 19:05:29.324899 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:05:29 crc kubenswrapper[4661]: I0120 19:05:29.325865 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:05:34 crc kubenswrapper[4661]: E0120 19:05:34.359088 4661 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 20 19:05:34 crc kubenswrapper[4661]: E0120 19:05:34.361578 4661 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mgw94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(fcc30bf2-7b68-4438-b7db-b041e1d1e2ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 19:05:34 crc kubenswrapper[4661]: E0120 19:05:34.362828 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" Jan 20 19:05:35 crc kubenswrapper[4661]: E0120 19:05:35.246226 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" Jan 20 19:05:48 crc kubenswrapper[4661]: I0120 19:05:48.602649 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 20 19:05:50 crc kubenswrapper[4661]: I0120 19:05:50.395978 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff","Type":"ContainerStarted","Data":"5d6626055d7f4e634db9c354fe71f73c4e5c0f9ea36ba6d8224e9c21c40d0966"} Jan 20 19:05:50 crc kubenswrapper[4661]: I0120 19:05:50.427536 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.815998908 podStartE2EDuration="52.427514518s" podCreationTimestamp="2026-01-20 19:04:58 +0000 UTC" firstStartedPulling="2026-01-20 19:05:00.988422312 +0000 UTC m=+3557.319211974" lastFinishedPulling="2026-01-20 19:05:48.599937921 +0000 UTC m=+3604.930727584" observedRunningTime="2026-01-20 19:05:50.418855463 +0000 UTC m=+3606.749645215" watchObservedRunningTime="2026-01-20 19:05:50.427514518 +0000 UTC m=+3606.758304180" Jan 20 19:05:59 crc kubenswrapper[4661]: I0120 19:05:59.323606 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:05:59 crc kubenswrapper[4661]: I0120 19:05:59.324727 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:06:29 crc kubenswrapper[4661]: I0120 19:06:29.323774 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:06:29 crc kubenswrapper[4661]: I0120 19:06:29.324237 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:06:29 crc kubenswrapper[4661]: I0120 19:06:29.324274 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 19:06:29 crc kubenswrapper[4661]: I0120 19:06:29.325016 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06"} 
pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 19:06:29 crc kubenswrapper[4661]: I0120 19:06:29.325069 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" gracePeriod=600 Jan 20 19:06:29 crc kubenswrapper[4661]: E0120 19:06:29.474286 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:06:29 crc kubenswrapper[4661]: I0120 19:06:29.795391 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" exitCode=0 Jan 20 19:06:29 crc kubenswrapper[4661]: I0120 19:06:29.795696 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06"} Jan 20 19:06:29 crc kubenswrapper[4661]: I0120 19:06:29.795738 4661 scope.go:117] "RemoveContainer" containerID="03255bad160b69aedb631395e65d9a4b12434de8081b19d4e9a6358a608a74a9" Jan 20 19:06:29 crc kubenswrapper[4661]: I0120 19:06:29.796939 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:06:29 crc kubenswrapper[4661]: E0120 19:06:29.797269 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:06:40 crc kubenswrapper[4661]: I0120 19:06:40.142377 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:06:40 crc kubenswrapper[4661]: E0120 19:06:40.143460 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:06:54 crc kubenswrapper[4661]: I0120 19:06:54.147277 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:06:54 crc kubenswrapper[4661]: E0120 19:06:54.148264 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:07:09 crc kubenswrapper[4661]: I0120 19:07:09.141946 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:07:09 crc kubenswrapper[4661]: E0120 19:07:09.142646 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:07:21 crc kubenswrapper[4661]: I0120 19:07:21.142361 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:07:21 crc kubenswrapper[4661]: E0120 19:07:21.143323 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:07:35 crc kubenswrapper[4661]: I0120 19:07:35.142782 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:07:35 crc kubenswrapper[4661]: E0120 19:07:35.143750 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:07:49 crc kubenswrapper[4661]: I0120 19:07:49.142171 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:07:49 crc kubenswrapper[4661]: E0120 19:07:49.142953 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:08:01 crc kubenswrapper[4661]: I0120 19:08:01.142358 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:08:01 crc kubenswrapper[4661]: E0120 19:08:01.143082 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:08:14 crc kubenswrapper[4661]: I0120 19:08:14.149126 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:08:14 crc kubenswrapper[4661]: E0120 19:08:14.151274 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:08:17 crc kubenswrapper[4661]: I0120 19:08:17.557700 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xddvz"] Jan 20 19:08:17 crc kubenswrapper[4661]: I0120 19:08:17.560726 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:17 crc kubenswrapper[4661]: I0120 19:08:17.580815 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xddvz"] Jan 20 19:08:17 crc kubenswrapper[4661]: I0120 19:08:17.698858 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-utilities\") pod \"community-operators-xddvz\" (UID: \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\") " pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:17 crc kubenswrapper[4661]: I0120 19:08:17.699200 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8svdh\" (UniqueName: \"kubernetes.io/projected/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-kube-api-access-8svdh\") pod \"community-operators-xddvz\" (UID: \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\") " pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:17 crc kubenswrapper[4661]: I0120 19:08:17.699348 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-catalog-content\") pod \"community-operators-xddvz\" (UID: \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\") " pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:17 crc kubenswrapper[4661]: I0120 19:08:17.801676 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-utilities\") pod \"community-operators-xddvz\" (UID: \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\") " pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:17 crc kubenswrapper[4661]: I0120 19:08:17.802104 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-utilities\") pod \"community-operators-xddvz\" (UID: \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\") " pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:17 crc kubenswrapper[4661]: I0120 19:08:17.804189 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8svdh\" (UniqueName: 
\"kubernetes.io/projected/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-kube-api-access-8svdh\") pod \"community-operators-xddvz\" (UID: \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\") " pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:17 crc kubenswrapper[4661]: I0120 19:08:17.804636 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-catalog-content\") pod \"community-operators-xddvz\" (UID: \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\") " pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:17 crc kubenswrapper[4661]: I0120 19:08:17.804990 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-catalog-content\") pod \"community-operators-xddvz\" (UID: \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\") " pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:17 crc kubenswrapper[4661]: I0120 19:08:17.824343 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svdh\" (UniqueName: \"kubernetes.io/projected/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-kube-api-access-8svdh\") pod \"community-operators-xddvz\" (UID: \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\") " pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:17 crc kubenswrapper[4661]: I0120 19:08:17.882148 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:18 crc kubenswrapper[4661]: I0120 19:08:18.496238 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xddvz"] Jan 20 19:08:18 crc kubenswrapper[4661]: I0120 19:08:18.931922 4661 generic.go:334] "Generic (PLEG): container finished" podID="e67510e4-209c-459a-a4c8-4a3e4b3d7e35" containerID="666d96093f089904ae49b493af3a224d51a4c7e876890c0310bc9723070c56fd" exitCode=0 Jan 20 19:08:18 crc kubenswrapper[4661]: I0120 19:08:18.931978 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xddvz" event={"ID":"e67510e4-209c-459a-a4c8-4a3e4b3d7e35","Type":"ContainerDied","Data":"666d96093f089904ae49b493af3a224d51a4c7e876890c0310bc9723070c56fd"} Jan 20 19:08:18 crc kubenswrapper[4661]: I0120 19:08:18.932197 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xddvz" event={"ID":"e67510e4-209c-459a-a4c8-4a3e4b3d7e35","Type":"ContainerStarted","Data":"9be54fa088237298738ffc7f53dae610056c23f56e299b4581bade19bbe3ae5e"} Jan 20 19:08:18 crc kubenswrapper[4661]: I0120 19:08:18.934488 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 19:08:19 crc kubenswrapper[4661]: I0120 19:08:19.943155 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xddvz" event={"ID":"e67510e4-209c-459a-a4c8-4a3e4b3d7e35","Type":"ContainerStarted","Data":"1fbc4ff11ca535c50b2949b4025644d5844db69baac376f81a33b90fe8cdaba7"} Jan 20 19:08:21 crc kubenswrapper[4661]: I0120 19:08:21.960814 4661 generic.go:334] "Generic (PLEG): container finished" podID="e67510e4-209c-459a-a4c8-4a3e4b3d7e35" containerID="1fbc4ff11ca535c50b2949b4025644d5844db69baac376f81a33b90fe8cdaba7" exitCode=0 Jan 20 19:08:21 crc kubenswrapper[4661]: I0120 19:08:21.960995 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-xddvz" event={"ID":"e67510e4-209c-459a-a4c8-4a3e4b3d7e35","Type":"ContainerDied","Data":"1fbc4ff11ca535c50b2949b4025644d5844db69baac376f81a33b90fe8cdaba7"} Jan 20 19:08:22 crc kubenswrapper[4661]: I0120 19:08:22.986435 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xddvz" event={"ID":"e67510e4-209c-459a-a4c8-4a3e4b3d7e35","Type":"ContainerStarted","Data":"14e7707d7ee3ff6001cbf8956716e33e2ee0b15930d99be438c57f5d8c398168"} Jan 20 19:08:23 crc kubenswrapper[4661]: I0120 19:08:23.025948 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xddvz" podStartSLOduration=2.565254467 podStartE2EDuration="6.025931537s" podCreationTimestamp="2026-01-20 19:08:17 +0000 UTC" firstStartedPulling="2026-01-20 19:08:18.934141027 +0000 UTC m=+3755.264930699" lastFinishedPulling="2026-01-20 19:08:22.394818107 +0000 UTC m=+3758.725607769" observedRunningTime="2026-01-20 19:08:23.018574366 +0000 UTC m=+3759.349364068" watchObservedRunningTime="2026-01-20 19:08:23.025931537 +0000 UTC m=+3759.356721199" Jan 20 19:08:25 crc kubenswrapper[4661]: I0120 19:08:25.142532 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:08:25 crc kubenswrapper[4661]: E0120 19:08:25.143802 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:08:27 crc kubenswrapper[4661]: I0120 19:08:27.882736 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:27 crc kubenswrapper[4661]: I0120 19:08:27.883038 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:27 crc kubenswrapper[4661]: I0120 19:08:27.955264 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:28 crc kubenswrapper[4661]: I0120 19:08:28.110096 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:28 crc kubenswrapper[4661]: I0120 19:08:28.205779 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xddvz"] Jan 20 19:08:30 crc kubenswrapper[4661]: I0120 19:08:30.077615 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xddvz" podUID="e67510e4-209c-459a-a4c8-4a3e4b3d7e35" containerName="registry-server" containerID="cri-o://14e7707d7ee3ff6001cbf8956716e33e2ee0b15930d99be438c57f5d8c398168" gracePeriod=2 Jan 20 19:08:30 crc kubenswrapper[4661]: I0120 19:08:30.642783 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:30 crc kubenswrapper[4661]: I0120 19:08:30.683806 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-utilities\") pod \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\" (UID: \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\") " Jan 20 19:08:30 crc kubenswrapper[4661]: I0120 19:08:30.684122 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8svdh\" (UniqueName: \"kubernetes.io/projected/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-kube-api-access-8svdh\") pod \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\" (UID: \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\") " Jan 20 19:08:30 crc kubenswrapper[4661]: I0120 19:08:30.684160 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-catalog-content\") pod \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\" (UID: \"e67510e4-209c-459a-a4c8-4a3e4b3d7e35\") " Jan 20 19:08:30 crc kubenswrapper[4661]: I0120 19:08:30.686804 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-utilities" (OuterVolumeSpecName: "utilities") pod "e67510e4-209c-459a-a4c8-4a3e4b3d7e35" (UID: "e67510e4-209c-459a-a4c8-4a3e4b3d7e35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:08:30 crc kubenswrapper[4661]: I0120 19:08:30.714956 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:08:30 crc kubenswrapper[4661]: I0120 19:08:30.715189 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-kube-api-access-8svdh" (OuterVolumeSpecName: "kube-api-access-8svdh") pod "e67510e4-209c-459a-a4c8-4a3e4b3d7e35" (UID: "e67510e4-209c-459a-a4c8-4a3e4b3d7e35"). InnerVolumeSpecName "kube-api-access-8svdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:08:30 crc kubenswrapper[4661]: I0120 19:08:30.752970 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e67510e4-209c-459a-a4c8-4a3e4b3d7e35" (UID: "e67510e4-209c-459a-a4c8-4a3e4b3d7e35"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:08:30 crc kubenswrapper[4661]: I0120 19:08:30.816720 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8svdh\" (UniqueName: \"kubernetes.io/projected/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-kube-api-access-8svdh\") on node \"crc\" DevicePath \"\"" Jan 20 19:08:30 crc kubenswrapper[4661]: I0120 19:08:30.816755 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67510e4-209c-459a-a4c8-4a3e4b3d7e35-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.089803 4661 generic.go:334] "Generic (PLEG): container finished" podID="e67510e4-209c-459a-a4c8-4a3e4b3d7e35" containerID="14e7707d7ee3ff6001cbf8956716e33e2ee0b15930d99be438c57f5d8c398168" exitCode=0 Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.089854 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xddvz" event={"ID":"e67510e4-209c-459a-a4c8-4a3e4b3d7e35","Type":"ContainerDied","Data":"14e7707d7ee3ff6001cbf8956716e33e2ee0b15930d99be438c57f5d8c398168"} Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.089906 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xddvz" event={"ID":"e67510e4-209c-459a-a4c8-4a3e4b3d7e35","Type":"ContainerDied","Data":"9be54fa088237298738ffc7f53dae610056c23f56e299b4581bade19bbe3ae5e"} Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.089916 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xddvz" Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.089928 4661 scope.go:117] "RemoveContainer" containerID="14e7707d7ee3ff6001cbf8956716e33e2ee0b15930d99be438c57f5d8c398168" Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.111093 4661 scope.go:117] "RemoveContainer" containerID="1fbc4ff11ca535c50b2949b4025644d5844db69baac376f81a33b90fe8cdaba7" Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.130581 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xddvz"] Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.144719 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xddvz"] Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.152894 4661 scope.go:117] "RemoveContainer" containerID="666d96093f089904ae49b493af3a224d51a4c7e876890c0310bc9723070c56fd" Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.194800 4661 scope.go:117] "RemoveContainer" containerID="14e7707d7ee3ff6001cbf8956716e33e2ee0b15930d99be438c57f5d8c398168" Jan 20 19:08:31 crc kubenswrapper[4661]: E0120 19:08:31.195321 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14e7707d7ee3ff6001cbf8956716e33e2ee0b15930d99be438c57f5d8c398168\": container with ID starting with 14e7707d7ee3ff6001cbf8956716e33e2ee0b15930d99be438c57f5d8c398168 not found: ID does not exist" containerID="14e7707d7ee3ff6001cbf8956716e33e2ee0b15930d99be438c57f5d8c398168" Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.195358 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14e7707d7ee3ff6001cbf8956716e33e2ee0b15930d99be438c57f5d8c398168"} err="failed to get container status 
\"14e7707d7ee3ff6001cbf8956716e33e2ee0b15930d99be438c57f5d8c398168\": rpc error: code = NotFound desc = could not find container \"14e7707d7ee3ff6001cbf8956716e33e2ee0b15930d99be438c57f5d8c398168\": container with ID starting with 14e7707d7ee3ff6001cbf8956716e33e2ee0b15930d99be438c57f5d8c398168 not found: ID does not exist" Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.195384 4661 scope.go:117] "RemoveContainer" containerID="1fbc4ff11ca535c50b2949b4025644d5844db69baac376f81a33b90fe8cdaba7" Jan 20 19:08:31 crc kubenswrapper[4661]: E0120 19:08:31.195785 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fbc4ff11ca535c50b2949b4025644d5844db69baac376f81a33b90fe8cdaba7\": container with ID starting with 1fbc4ff11ca535c50b2949b4025644d5844db69baac376f81a33b90fe8cdaba7 not found: ID does not exist" containerID="1fbc4ff11ca535c50b2949b4025644d5844db69baac376f81a33b90fe8cdaba7" Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.195809 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fbc4ff11ca535c50b2949b4025644d5844db69baac376f81a33b90fe8cdaba7"} err="failed to get container status \"1fbc4ff11ca535c50b2949b4025644d5844db69baac376f81a33b90fe8cdaba7\": rpc error: code = NotFound desc = could not find container \"1fbc4ff11ca535c50b2949b4025644d5844db69baac376f81a33b90fe8cdaba7\": container with ID starting with 1fbc4ff11ca535c50b2949b4025644d5844db69baac376f81a33b90fe8cdaba7 not found: ID does not exist" Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.195832 4661 scope.go:117] "RemoveContainer" containerID="666d96093f089904ae49b493af3a224d51a4c7e876890c0310bc9723070c56fd" Jan 20 19:08:31 crc kubenswrapper[4661]: E0120 19:08:31.196074 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"666d96093f089904ae49b493af3a224d51a4c7e876890c0310bc9723070c56fd\": container with ID starting with 666d96093f089904ae49b493af3a224d51a4c7e876890c0310bc9723070c56fd not found: ID does not exist" containerID="666d96093f089904ae49b493af3a224d51a4c7e876890c0310bc9723070c56fd" Jan 20 19:08:31 crc kubenswrapper[4661]: I0120 19:08:31.196101 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"666d96093f089904ae49b493af3a224d51a4c7e876890c0310bc9723070c56fd"} err="failed to get container status \"666d96093f089904ae49b493af3a224d51a4c7e876890c0310bc9723070c56fd\": rpc error: code = NotFound desc = could not find container \"666d96093f089904ae49b493af3a224d51a4c7e876890c0310bc9723070c56fd\": container with ID starting with 666d96093f089904ae49b493af3a224d51a4c7e876890c0310bc9723070c56fd not found: ID does not exist" Jan 20 19:08:32 crc kubenswrapper[4661]: I0120 19:08:32.155899 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67510e4-209c-459a-a4c8-4a3e4b3d7e35" path="/var/lib/kubelet/pods/e67510e4-209c-459a-a4c8-4a3e4b3d7e35/volumes" Jan 20 19:08:37 crc kubenswrapper[4661]: I0120 19:08:37.143821 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:08:37 crc kubenswrapper[4661]: E0120 19:08:37.144629 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:08:50 crc kubenswrapper[4661]: I0120 19:08:50.143111 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:08:50 crc kubenswrapper[4661]: E0120 19:08:50.144130 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:09:05 crc kubenswrapper[4661]: I0120 19:09:05.157715 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:09:05 crc kubenswrapper[4661]: E0120 19:09:05.158415 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:09:14 crc kubenswrapper[4661]: I0120 19:09:14.982879 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-92wqg"] Jan 20 19:09:14 crc kubenswrapper[4661]: E0120 19:09:14.988771 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67510e4-209c-459a-a4c8-4a3e4b3d7e35" containerName="registry-server" Jan 20 19:09:14 crc kubenswrapper[4661]: I0120 19:09:14.988982 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67510e4-209c-459a-a4c8-4a3e4b3d7e35" containerName="registry-server" Jan 20 19:09:14 crc kubenswrapper[4661]: E0120 19:09:14.989096 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67510e4-209c-459a-a4c8-4a3e4b3d7e35" containerName="extract-utilities" Jan 20 19:09:14 crc kubenswrapper[4661]: I0120 19:09:14.989173 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67510e4-209c-459a-a4c8-4a3e4b3d7e35" containerName="extract-utilities" Jan 20 19:09:14 crc kubenswrapper[4661]: E0120 19:09:14.989279 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67510e4-209c-459a-a4c8-4a3e4b3d7e35" containerName="extract-content" Jan 20 19:09:14 crc kubenswrapper[4661]: I0120 19:09:14.989352 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67510e4-209c-459a-a4c8-4a3e4b3d7e35" containerName="extract-content" Jan 20 19:09:14 crc kubenswrapper[4661]: I0120 19:09:14.989768 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67510e4-209c-459a-a4c8-4a3e4b3d7e35" containerName="registry-server" Jan 20 19:09:14 crc kubenswrapper[4661]: I0120 19:09:14.996079 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92wqg"] Jan 20 19:09:14 crc kubenswrapper[4661]: I0120 19:09:14.996170 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:15 crc kubenswrapper[4661]: I0120 19:09:15.164291 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-catalog-content\") pod \"certified-operators-92wqg\" (UID: \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\") " pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:15 crc kubenswrapper[4661]: I0120 19:09:15.164715 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-utilities\") pod \"certified-operators-92wqg\" (UID: \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\") " pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:15 crc kubenswrapper[4661]: I0120 19:09:15.164846 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trrph\" (UniqueName: \"kubernetes.io/projected/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-kube-api-access-trrph\") pod \"certified-operators-92wqg\" (UID: \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\") " pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:15 crc kubenswrapper[4661]: I0120 19:09:15.266368 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trrph\" (UniqueName: \"kubernetes.io/projected/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-kube-api-access-trrph\") pod \"certified-operators-92wqg\" (UID: \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\") " pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:15 crc kubenswrapper[4661]: I0120 19:09:15.266460 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-catalog-content\") pod \"certified-operators-92wqg\" (UID: \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\") " pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:15 crc kubenswrapper[4661]: I0120 19:09:15.266498 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-utilities\") pod \"certified-operators-92wqg\" (UID: \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\") " pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:15 crc kubenswrapper[4661]: I0120 19:09:15.267047 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-utilities\") pod \"certified-operators-92wqg\" (UID: \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\") " pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:15 crc kubenswrapper[4661]: I0120 19:09:15.267158 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-catalog-content\") pod \"certified-operators-92wqg\" (UID: \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\") " pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:15 crc kubenswrapper[4661]: I0120 19:09:15.289272 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trrph\" (UniqueName: \"kubernetes.io/projected/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-kube-api-access-trrph\") pod 
\"certified-operators-92wqg\" (UID: \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\") " pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:15 crc kubenswrapper[4661]: I0120 19:09:15.335031 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:16 crc kubenswrapper[4661]: I0120 19:09:16.276358 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92wqg"] Jan 20 19:09:16 crc kubenswrapper[4661]: I0120 19:09:16.614988 4661 generic.go:334] "Generic (PLEG): container finished" podID="03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" containerID="46ca33ea7b3e320fb10f5809a89c1e5d4ee0159c1b7231f98060f34998472899" exitCode=0 Jan 20 19:09:16 crc kubenswrapper[4661]: I0120 19:09:16.615048 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92wqg" event={"ID":"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0","Type":"ContainerDied","Data":"46ca33ea7b3e320fb10f5809a89c1e5d4ee0159c1b7231f98060f34998472899"} Jan 20 19:09:16 crc kubenswrapper[4661]: I0120 19:09:16.615104 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92wqg" event={"ID":"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0","Type":"ContainerStarted","Data":"f8c1f7c0c48847a57408f28681e53861fe37af16d6b36660dd3c3b79ab7b8a83"} Jan 20 19:09:17 crc kubenswrapper[4661]: I0120 19:09:17.623917 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92wqg" event={"ID":"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0","Type":"ContainerStarted","Data":"8a9a1c20a31a87c3611630245930931e286d8ef4f9184aecf0d9806a688c1506"} Jan 20 19:09:19 crc kubenswrapper[4661]: I0120 19:09:19.142441 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:09:19 crc kubenswrapper[4661]: E0120 19:09:19.143430 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:09:19 crc kubenswrapper[4661]: I0120 19:09:19.639856 4661 generic.go:334] "Generic (PLEG): container finished" podID="03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" containerID="8a9a1c20a31a87c3611630245930931e286d8ef4f9184aecf0d9806a688c1506" exitCode=0 Jan 20 19:09:19 crc kubenswrapper[4661]: I0120 19:09:19.639889 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92wqg" event={"ID":"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0","Type":"ContainerDied","Data":"8a9a1c20a31a87c3611630245930931e286d8ef4f9184aecf0d9806a688c1506"} Jan 20 19:09:20 crc kubenswrapper[4661]: I0120 19:09:20.649232 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92wqg" event={"ID":"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0","Type":"ContainerStarted","Data":"1126fced1bc61c36d729521133c00c49f63e48def042a9a7fd6e6321ebd3606a"} Jan 20 19:09:20 crc kubenswrapper[4661]: I0120 19:09:20.675883 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-92wqg" podStartSLOduration=3.157483899 podStartE2EDuration="6.675863855s" 
podCreationTimestamp="2026-01-20 19:09:14 +0000 UTC" firstStartedPulling="2026-01-20 19:09:16.618167259 +0000 UTC m=+3812.948956941" lastFinishedPulling="2026-01-20 19:09:20.136547195 +0000 UTC m=+3816.467336897" observedRunningTime="2026-01-20 19:09:20.67102681 +0000 UTC m=+3817.001816482" watchObservedRunningTime="2026-01-20 19:09:20.675863855 +0000 UTC m=+3817.006653517" Jan 20 19:09:25 crc kubenswrapper[4661]: I0120 19:09:25.336094 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:25 crc kubenswrapper[4661]: I0120 19:09:25.336711 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:25 crc kubenswrapper[4661]: I0120 19:09:25.893879 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:25 crc kubenswrapper[4661]: I0120 19:09:25.944896 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:26 crc kubenswrapper[4661]: I0120 19:09:26.168103 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-92wqg"] Jan 20 19:09:27 crc kubenswrapper[4661]: I0120 19:09:27.706102 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-92wqg" podUID="03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" containerName="registry-server" containerID="cri-o://1126fced1bc61c36d729521133c00c49f63e48def042a9a7fd6e6321ebd3606a" gracePeriod=2 Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.332759 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.445024 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-catalog-content\") pod \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\" (UID: \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\") " Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.445087 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trrph\" (UniqueName: \"kubernetes.io/projected/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-kube-api-access-trrph\") pod \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\" (UID: \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\") " Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.445294 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-utilities\") pod \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\" (UID: \"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0\") " Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.445957 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-utilities" (OuterVolumeSpecName: "utilities") pod "03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" (UID: "03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.451551 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-kube-api-access-trrph" (OuterVolumeSpecName: "kube-api-access-trrph") pod "03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" (UID: "03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0"). InnerVolumeSpecName "kube-api-access-trrph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.485009 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" (UID: "03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.547581 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.547619 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.547631 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trrph\" (UniqueName: \"kubernetes.io/projected/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0-kube-api-access-trrph\") on node \"crc\" DevicePath \"\"" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.715210 4661 generic.go:334] "Generic (PLEG): container finished" podID="03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" containerID="1126fced1bc61c36d729521133c00c49f63e48def042a9a7fd6e6321ebd3606a" exitCode=0 Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.715567 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92wqg" event={"ID":"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0","Type":"ContainerDied","Data":"1126fced1bc61c36d729521133c00c49f63e48def042a9a7fd6e6321ebd3606a"} Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.715601 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92wqg" event={"ID":"03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0","Type":"ContainerDied","Data":"f8c1f7c0c48847a57408f28681e53861fe37af16d6b36660dd3c3b79ab7b8a83"} Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.715620 4661 scope.go:117] "RemoveContainer" containerID="1126fced1bc61c36d729521133c00c49f63e48def042a9a7fd6e6321ebd3606a" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.715795 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-92wqg" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.737090 4661 scope.go:117] "RemoveContainer" containerID="8a9a1c20a31a87c3611630245930931e286d8ef4f9184aecf0d9806a688c1506" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.762762 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-92wqg"] Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.777280 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-92wqg"] Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.779606 4661 scope.go:117] "RemoveContainer" containerID="46ca33ea7b3e320fb10f5809a89c1e5d4ee0159c1b7231f98060f34998472899" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.821242 4661 scope.go:117] "RemoveContainer" containerID="1126fced1bc61c36d729521133c00c49f63e48def042a9a7fd6e6321ebd3606a" Jan 20 19:09:28 crc kubenswrapper[4661]: E0120 19:09:28.822856 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1126fced1bc61c36d729521133c00c49f63e48def042a9a7fd6e6321ebd3606a\": container with ID starting with 1126fced1bc61c36d729521133c00c49f63e48def042a9a7fd6e6321ebd3606a not found: ID does not exist" containerID="1126fced1bc61c36d729521133c00c49f63e48def042a9a7fd6e6321ebd3606a" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.822910 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1126fced1bc61c36d729521133c00c49f63e48def042a9a7fd6e6321ebd3606a"} err="failed to get container status \"1126fced1bc61c36d729521133c00c49f63e48def042a9a7fd6e6321ebd3606a\": rpc error: code = NotFound desc = could not find container \"1126fced1bc61c36d729521133c00c49f63e48def042a9a7fd6e6321ebd3606a\": container with ID starting with 1126fced1bc61c36d729521133c00c49f63e48def042a9a7fd6e6321ebd3606a not found: ID does not exist" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.822942 4661 scope.go:117] "RemoveContainer" containerID="8a9a1c20a31a87c3611630245930931e286d8ef4f9184aecf0d9806a688c1506" Jan 20 19:09:28 crc kubenswrapper[4661]: E0120 19:09:28.823186 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a9a1c20a31a87c3611630245930931e286d8ef4f9184aecf0d9806a688c1506\": container with ID starting with 8a9a1c20a31a87c3611630245930931e286d8ef4f9184aecf0d9806a688c1506 not found: ID does not exist" containerID="8a9a1c20a31a87c3611630245930931e286d8ef4f9184aecf0d9806a688c1506" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.823214 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a9a1c20a31a87c3611630245930931e286d8ef4f9184aecf0d9806a688c1506"} err="failed to get container status \"8a9a1c20a31a87c3611630245930931e286d8ef4f9184aecf0d9806a688c1506\": rpc error: code = NotFound desc = could not find container \"8a9a1c20a31a87c3611630245930931e286d8ef4f9184aecf0d9806a688c1506\": container with ID starting with 8a9a1c20a31a87c3611630245930931e286d8ef4f9184aecf0d9806a688c1506 not found: ID does not exist" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.823230 4661 scope.go:117] "RemoveContainer" containerID="46ca33ea7b3e320fb10f5809a89c1e5d4ee0159c1b7231f98060f34998472899" Jan 20 19:09:28 crc kubenswrapper[4661]: E0120 19:09:28.823415 4661 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"46ca33ea7b3e320fb10f5809a89c1e5d4ee0159c1b7231f98060f34998472899\": container with ID starting with 46ca33ea7b3e320fb10f5809a89c1e5d4ee0159c1b7231f98060f34998472899 not found: ID does not exist" containerID="46ca33ea7b3e320fb10f5809a89c1e5d4ee0159c1b7231f98060f34998472899" Jan 20 19:09:28 crc kubenswrapper[4661]: I0120 19:09:28.823439 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46ca33ea7b3e320fb10f5809a89c1e5d4ee0159c1b7231f98060f34998472899"} err="failed to get container status \"46ca33ea7b3e320fb10f5809a89c1e5d4ee0159c1b7231f98060f34998472899\": rpc error: code = NotFound desc = could not find container \"46ca33ea7b3e320fb10f5809a89c1e5d4ee0159c1b7231f98060f34998472899\": container with ID starting with 46ca33ea7b3e320fb10f5809a89c1e5d4ee0159c1b7231f98060f34998472899 not found: ID does not exist" Jan 20 19:09:30 crc kubenswrapper[4661]: I0120 19:09:30.161820 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" path="/var/lib/kubelet/pods/03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0/volumes" Jan 20 19:09:33 crc kubenswrapper[4661]: I0120 19:09:33.142128 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:09:33 crc kubenswrapper[4661]: E0120 19:09:33.142647 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:09:44 crc kubenswrapper[4661]: I0120 19:09:44.155642 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:09:44 crc kubenswrapper[4661]: E0120 19:09:44.156381 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:09:57 crc kubenswrapper[4661]: I0120 19:09:57.142802 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:09:57 crc kubenswrapper[4661]: E0120 19:09:57.143954 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:10:10 crc kubenswrapper[4661]: I0120 19:10:10.142155 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:10:10 crc kubenswrapper[4661]: E0120 19:10:10.142982 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:10:25 crc kubenswrapper[4661]: I0120 19:10:25.142769 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:10:25 crc kubenswrapper[4661]: E0120 19:10:25.143546 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:10:40 crc kubenswrapper[4661]: I0120 19:10:40.143302 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:10:40 crc kubenswrapper[4661]: E0120 19:10:40.144492 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:10:52 crc kubenswrapper[4661]: I0120 19:10:52.143010 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:10:52 crc kubenswrapper[4661]: E0120 19:10:52.143790 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:11:04 crc kubenswrapper[4661]: I0120 19:11:04.146059 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:11:04 crc kubenswrapper[4661]: E0120 19:11:04.146865 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.111234 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qwrx4"] Jan 20 19:11:15 crc kubenswrapper[4661]: E0120 19:11:15.113126 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" containerName="extract-utilities" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.113156 4661 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" containerName="extract-utilities" Jan 20 19:11:15 crc kubenswrapper[4661]: E0120 19:11:15.113186 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" containerName="registry-server" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.113196 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" containerName="registry-server" Jan 20 19:11:15 crc kubenswrapper[4661]: E0120 19:11:15.113219 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" containerName="extract-content" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.113227 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" containerName="extract-content" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.113728 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="03bc9f87-0087-4bc7-bf16-fc0d90dc6bb0" containerName="registry-server" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.121501 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.162414 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwrx4"] Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.276756 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e276be9c-406f-4832-b677-8cd432e94253-catalog-content\") pod \"redhat-marketplace-qwrx4\" (UID: \"e276be9c-406f-4832-b677-8cd432e94253\") " pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.277300 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e276be9c-406f-4832-b677-8cd432e94253-utilities\") pod \"redhat-marketplace-qwrx4\" (UID: \"e276be9c-406f-4832-b677-8cd432e94253\") " pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.277372 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czwv7\" (UniqueName: \"kubernetes.io/projected/e276be9c-406f-4832-b677-8cd432e94253-kube-api-access-czwv7\") pod \"redhat-marketplace-qwrx4\" (UID: \"e276be9c-406f-4832-b677-8cd432e94253\") " pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.378890 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czwv7\" (UniqueName: \"kubernetes.io/projected/e276be9c-406f-4832-b677-8cd432e94253-kube-api-access-czwv7\") pod \"redhat-marketplace-qwrx4\" (UID: \"e276be9c-406f-4832-b677-8cd432e94253\") " pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.379000 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e276be9c-406f-4832-b677-8cd432e94253-catalog-content\") pod \"redhat-marketplace-qwrx4\" (UID: \"e276be9c-406f-4832-b677-8cd432e94253\") " pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.379156 4661 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e276be9c-406f-4832-b677-8cd432e94253-utilities\") pod \"redhat-marketplace-qwrx4\" (UID: \"e276be9c-406f-4832-b677-8cd432e94253\") " pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.379542 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e276be9c-406f-4832-b677-8cd432e94253-utilities\") pod \"redhat-marketplace-qwrx4\" (UID: \"e276be9c-406f-4832-b677-8cd432e94253\") " pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.379541 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e276be9c-406f-4832-b677-8cd432e94253-catalog-content\") pod \"redhat-marketplace-qwrx4\" (UID: \"e276be9c-406f-4832-b677-8cd432e94253\") " pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.399428 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czwv7\" (UniqueName: \"kubernetes.io/projected/e276be9c-406f-4832-b677-8cd432e94253-kube-api-access-czwv7\") pod \"redhat-marketplace-qwrx4\" (UID: \"e276be9c-406f-4832-b677-8cd432e94253\") " pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.452102 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:15 crc kubenswrapper[4661]: I0120 19:11:15.953494 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwrx4"] Jan 20 19:11:16 crc kubenswrapper[4661]: I0120 19:11:16.763126 4661 generic.go:334] "Generic (PLEG): container finished" podID="e276be9c-406f-4832-b677-8cd432e94253" containerID="337b7033e8307f4ca51d048336f1baf19b37534279af9ddd2f185c898f3c3fea" exitCode=0 Jan 20 19:11:16 crc kubenswrapper[4661]: I0120 19:11:16.763166 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwrx4" event={"ID":"e276be9c-406f-4832-b677-8cd432e94253","Type":"ContainerDied","Data":"337b7033e8307f4ca51d048336f1baf19b37534279af9ddd2f185c898f3c3fea"} Jan 20 19:11:16 crc kubenswrapper[4661]: I0120 19:11:16.763449 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwrx4" event={"ID":"e276be9c-406f-4832-b677-8cd432e94253","Type":"ContainerStarted","Data":"c618ba35bf5e139452ede43b8c4e2ead3277ed24dda4a1e6d752c993db8c3f7e"} Jan 20 19:11:17 crc kubenswrapper[4661]: I0120 19:11:17.773339 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwrx4" event={"ID":"e276be9c-406f-4832-b677-8cd432e94253","Type":"ContainerStarted","Data":"60b1aa8235f35e0b7af69a4a1c7726b850a9e30626f0bfbc63701a69e4e64ea5"} Jan 20 19:11:18 crc kubenswrapper[4661]: I0120 19:11:18.142880 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:11:18 crc kubenswrapper[4661]: E0120 19:11:18.143566 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:11:18 crc kubenswrapper[4661]: I0120 19:11:18.783496 4661 generic.go:334] "Generic (PLEG): container finished" podID="e276be9c-406f-4832-b677-8cd432e94253" containerID="60b1aa8235f35e0b7af69a4a1c7726b850a9e30626f0bfbc63701a69e4e64ea5" exitCode=0 Jan 20 19:11:18 crc kubenswrapper[4661]: I0120 19:11:18.783541 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwrx4" event={"ID":"e276be9c-406f-4832-b677-8cd432e94253","Type":"ContainerDied","Data":"60b1aa8235f35e0b7af69a4a1c7726b850a9e30626f0bfbc63701a69e4e64ea5"} Jan 20 19:11:19 crc kubenswrapper[4661]: I0120 19:11:19.793539 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwrx4" event={"ID":"e276be9c-406f-4832-b677-8cd432e94253","Type":"ContainerStarted","Data":"f22640bc068fbf34e954ca207537f16d8a72a8b0e5de24827f26ec74f2b9e96d"} Jan 20 19:11:19 crc kubenswrapper[4661]: I0120 19:11:19.816388 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qwrx4" podStartSLOduration=2.351715901 podStartE2EDuration="4.816372043s" podCreationTimestamp="2026-01-20 19:11:15 +0000 UTC" firstStartedPulling="2026-01-20 19:11:16.764842037 +0000 UTC m=+3933.095631699" lastFinishedPulling="2026-01-20 19:11:19.229498179 +0000 UTC m=+3935.560287841" observedRunningTime="2026-01-20 19:11:19.811106246 +0000 UTC m=+3936.141895918" watchObservedRunningTime="2026-01-20 19:11:19.816372043 +0000 UTC m=+3936.147161705" Jan 20 19:11:25 crc kubenswrapper[4661]: I0120 19:11:25.452573 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:25 crc kubenswrapper[4661]: I0120 19:11:25.453287 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:25 crc kubenswrapper[4661]: I0120 19:11:25.506556 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:25 crc kubenswrapper[4661]: I0120 19:11:25.919270 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:25 crc kubenswrapper[4661]: I0120 19:11:25.977445 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwrx4"] Jan 20 19:11:27 crc kubenswrapper[4661]: I0120 19:11:27.860752 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qwrx4" podUID="e276be9c-406f-4832-b677-8cd432e94253" containerName="registry-server" containerID="cri-o://f22640bc068fbf34e954ca207537f16d8a72a8b0e5de24827f26ec74f2b9e96d" gracePeriod=2 Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.414366 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.453295 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czwv7\" (UniqueName: \"kubernetes.io/projected/e276be9c-406f-4832-b677-8cd432e94253-kube-api-access-czwv7\") pod \"e276be9c-406f-4832-b677-8cd432e94253\" (UID: \"e276be9c-406f-4832-b677-8cd432e94253\") " Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.453353 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e276be9c-406f-4832-b677-8cd432e94253-catalog-content\") pod \"e276be9c-406f-4832-b677-8cd432e94253\" (UID: \"e276be9c-406f-4832-b677-8cd432e94253\") " Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.453460 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e276be9c-406f-4832-b677-8cd432e94253-utilities\") pod \"e276be9c-406f-4832-b677-8cd432e94253\" (UID: \"e276be9c-406f-4832-b677-8cd432e94253\") " Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.455322 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e276be9c-406f-4832-b677-8cd432e94253-utilities" (OuterVolumeSpecName: "utilities") pod "e276be9c-406f-4832-b677-8cd432e94253" (UID: "e276be9c-406f-4832-b677-8cd432e94253"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.465903 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e276be9c-406f-4832-b677-8cd432e94253-kube-api-access-czwv7" (OuterVolumeSpecName: "kube-api-access-czwv7") pod "e276be9c-406f-4832-b677-8cd432e94253" (UID: "e276be9c-406f-4832-b677-8cd432e94253"). InnerVolumeSpecName "kube-api-access-czwv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.483035 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e276be9c-406f-4832-b677-8cd432e94253-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e276be9c-406f-4832-b677-8cd432e94253" (UID: "e276be9c-406f-4832-b677-8cd432e94253"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.556615 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e276be9c-406f-4832-b677-8cd432e94253-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.556655 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czwv7\" (UniqueName: \"kubernetes.io/projected/e276be9c-406f-4832-b677-8cd432e94253-kube-api-access-czwv7\") on node \"crc\" DevicePath \"\"" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.556758 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e276be9c-406f-4832-b677-8cd432e94253-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.871026 4661 generic.go:334] "Generic (PLEG): container finished" podID="e276be9c-406f-4832-b677-8cd432e94253" containerID="f22640bc068fbf34e954ca207537f16d8a72a8b0e5de24827f26ec74f2b9e96d" exitCode=0 Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.871103 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwrx4" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.871130 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwrx4" event={"ID":"e276be9c-406f-4832-b677-8cd432e94253","Type":"ContainerDied","Data":"f22640bc068fbf34e954ca207537f16d8a72a8b0e5de24827f26ec74f2b9e96d"} Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.871403 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwrx4" event={"ID":"e276be9c-406f-4832-b677-8cd432e94253","Type":"ContainerDied","Data":"c618ba35bf5e139452ede43b8c4e2ead3277ed24dda4a1e6d752c993db8c3f7e"} Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.871425 4661 scope.go:117] "RemoveContainer" containerID="f22640bc068fbf34e954ca207537f16d8a72a8b0e5de24827f26ec74f2b9e96d" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.896819 4661 scope.go:117] "RemoveContainer" containerID="60b1aa8235f35e0b7af69a4a1c7726b850a9e30626f0bfbc63701a69e4e64ea5" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.913513 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwrx4"] Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.925478 4661 scope.go:117] "RemoveContainer" containerID="337b7033e8307f4ca51d048336f1baf19b37534279af9ddd2f185c898f3c3fea" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.927694 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwrx4"] Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.984278 4661 scope.go:117] "RemoveContainer" containerID="f22640bc068fbf34e954ca207537f16d8a72a8b0e5de24827f26ec74f2b9e96d" Jan 20 19:11:28 crc kubenswrapper[4661]: E0120 19:11:28.985214 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f22640bc068fbf34e954ca207537f16d8a72a8b0e5de24827f26ec74f2b9e96d\": container with ID starting with f22640bc068fbf34e954ca207537f16d8a72a8b0e5de24827f26ec74f2b9e96d not found: ID does not exist" containerID="f22640bc068fbf34e954ca207537f16d8a72a8b0e5de24827f26ec74f2b9e96d" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.985281 4661 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22640bc068fbf34e954ca207537f16d8a72a8b0e5de24827f26ec74f2b9e96d"} err="failed to get container status \"f22640bc068fbf34e954ca207537f16d8a72a8b0e5de24827f26ec74f2b9e96d\": rpc error: code = NotFound desc = could not find container \"f22640bc068fbf34e954ca207537f16d8a72a8b0e5de24827f26ec74f2b9e96d\": container with ID starting with f22640bc068fbf34e954ca207537f16d8a72a8b0e5de24827f26ec74f2b9e96d not found: ID does not exist" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.985302 4661 scope.go:117] "RemoveContainer" containerID="60b1aa8235f35e0b7af69a4a1c7726b850a9e30626f0bfbc63701a69e4e64ea5" Jan 20 19:11:28 crc kubenswrapper[4661]: E0120 19:11:28.985588 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60b1aa8235f35e0b7af69a4a1c7726b850a9e30626f0bfbc63701a69e4e64ea5\": container with ID starting with 60b1aa8235f35e0b7af69a4a1c7726b850a9e30626f0bfbc63701a69e4e64ea5 not found: ID does not exist" containerID="60b1aa8235f35e0b7af69a4a1c7726b850a9e30626f0bfbc63701a69e4e64ea5" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.985631 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b1aa8235f35e0b7af69a4a1c7726b850a9e30626f0bfbc63701a69e4e64ea5"} err="failed to get container status \"60b1aa8235f35e0b7af69a4a1c7726b850a9e30626f0bfbc63701a69e4e64ea5\": rpc error: code = NotFound desc = could not find container \"60b1aa8235f35e0b7af69a4a1c7726b850a9e30626f0bfbc63701a69e4e64ea5\": container with ID starting with 60b1aa8235f35e0b7af69a4a1c7726b850a9e30626f0bfbc63701a69e4e64ea5 not found: ID does not exist" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.985658 4661 scope.go:117] "RemoveContainer" containerID="337b7033e8307f4ca51d048336f1baf19b37534279af9ddd2f185c898f3c3fea" Jan 20 19:11:28 crc kubenswrapper[4661]: E0120 19:11:28.986059 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"337b7033e8307f4ca51d048336f1baf19b37534279af9ddd2f185c898f3c3fea\": container with ID starting with 337b7033e8307f4ca51d048336f1baf19b37534279af9ddd2f185c898f3c3fea not found: ID does not exist" containerID="337b7033e8307f4ca51d048336f1baf19b37534279af9ddd2f185c898f3c3fea" Jan 20 19:11:28 crc kubenswrapper[4661]: I0120 19:11:28.986088 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"337b7033e8307f4ca51d048336f1baf19b37534279af9ddd2f185c898f3c3fea"} err="failed to get container status \"337b7033e8307f4ca51d048336f1baf19b37534279af9ddd2f185c898f3c3fea\": rpc error: code = NotFound desc = could not find container \"337b7033e8307f4ca51d048336f1baf19b37534279af9ddd2f185c898f3c3fea\": container with ID starting with 337b7033e8307f4ca51d048336f1baf19b37534279af9ddd2f185c898f3c3fea not found: ID does not exist" Jan 20 19:11:30 crc kubenswrapper[4661]: I0120 19:11:30.156032 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e276be9c-406f-4832-b677-8cd432e94253" path="/var/lib/kubelet/pods/e276be9c-406f-4832-b677-8cd432e94253/volumes" Jan 20 19:11:32 crc kubenswrapper[4661]: I0120 19:11:32.142051 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:11:32 crc kubenswrapper[4661]: I0120 19:11:32.904350 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"9757b3bc3bc7510737ffec1f5986f4d984f53c48810da16a3a8aa159c2c5415e"} Jan 20 19:12:25 crc kubenswrapper[4661]: I0120 19:12:25.073719 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-316e-account-create-update-bp2gj"] Jan 20 19:12:25 crc kubenswrapper[4661]: I0120 19:12:25.083105 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-24n4j"] Jan 20 19:12:25 crc kubenswrapper[4661]: I0120 19:12:25.097663 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-316e-account-create-update-bp2gj"] Jan 20 19:12:25 crc kubenswrapper[4661]: I0120 19:12:25.105963 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-24n4j"] Jan 20 19:12:26 crc kubenswrapper[4661]: I0120 19:12:26.164043 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa209ed-495f-49bb-b9cc-01ad4e1032a3" path="/var/lib/kubelet/pods/7fa209ed-495f-49bb-b9cc-01ad4e1032a3/volumes" Jan 20 19:12:26 crc kubenswrapper[4661]: I0120 19:12:26.166404 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f250573d-7ee6-4f08-92f7-b8997189e124" path="/var/lib/kubelet/pods/f250573d-7ee6-4f08-92f7-b8997189e124/volumes" Jan 20 19:12:55 crc kubenswrapper[4661]: I0120 19:12:55.064778 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-glqvr"] Jan 20 19:12:55 crc kubenswrapper[4661]: I0120 19:12:55.076291 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-glqvr"] Jan 20 19:12:56 crc kubenswrapper[4661]: I0120 19:12:56.157329 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147f6908-f22c-451e-85e0-d75cce7af6a7" path="/var/lib/kubelet/pods/147f6908-f22c-451e-85e0-d75cce7af6a7/volumes" Jan 20 19:13:19 crc kubenswrapper[4661]: I0120 19:13:19.016724 4661 scope.go:117] "RemoveContainer" containerID="b587ce68b106b5ae693f9592d665999efe528a3c0e15079b8ad23da1349b8d97" Jan 20 19:13:19 crc kubenswrapper[4661]: I0120 19:13:19.051270 4661 scope.go:117] "RemoveContainer" containerID="d2f1d230ef760e9a7e511a0e8233b7a7dd94d66a7a0bd6fbdd810b6ddf328966" Jan 20 19:13:19 crc kubenswrapper[4661]: I0120 19:13:19.117550 4661 scope.go:117] "RemoveContainer" containerID="f837cd65f87bf0f896233b6c14fdf72c29a919824b0b36a248eb103b1c6a5d27" Jan 20 19:13:59 crc kubenswrapper[4661]: I0120 19:13:59.323796 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:13:59 crc kubenswrapper[4661]: I0120 19:13:59.324397 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:14:02 crc kubenswrapper[4661]: I0120 19:14:02.966478 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q7mmn"] Jan 20 19:14:02 crc kubenswrapper[4661]: E0120 19:14:02.967342 4661 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e276be9c-406f-4832-b677-8cd432e94253" containerName="registry-server" Jan 20 19:14:02 crc kubenswrapper[4661]: I0120 19:14:02.967355 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e276be9c-406f-4832-b677-8cd432e94253" containerName="registry-server" Jan 20 19:14:02 crc kubenswrapper[4661]: E0120 19:14:02.967379 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e276be9c-406f-4832-b677-8cd432e94253" containerName="extract-utilities" Jan 20 19:14:02 crc kubenswrapper[4661]: I0120 19:14:02.967385 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e276be9c-406f-4832-b677-8cd432e94253" containerName="extract-utilities" Jan 20 19:14:02 crc kubenswrapper[4661]: E0120 19:14:02.967403 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e276be9c-406f-4832-b677-8cd432e94253" containerName="extract-content" Jan 20 19:14:02 crc kubenswrapper[4661]: I0120 19:14:02.967409 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e276be9c-406f-4832-b677-8cd432e94253" containerName="extract-content" Jan 20 19:14:02 crc kubenswrapper[4661]: I0120 19:14:02.967576 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e276be9c-406f-4832-b677-8cd432e94253" containerName="registry-server" Jan 20 19:14:02 crc kubenswrapper[4661]: I0120 19:14:02.968801 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:02 crc kubenswrapper[4661]: I0120 19:14:02.987527 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7mmn"] Jan 20 19:14:03 crc kubenswrapper[4661]: I0120 19:14:03.147777 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-catalog-content\") pod \"redhat-operators-q7mmn\" (UID: \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\") " pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:03 crc kubenswrapper[4661]: I0120 19:14:03.147826 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-utilities\") pod \"redhat-operators-q7mmn\" (UID: \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\") " pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:03 crc kubenswrapper[4661]: I0120 19:14:03.147851 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckg6l\" (UniqueName: \"kubernetes.io/projected/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-kube-api-access-ckg6l\") pod \"redhat-operators-q7mmn\" (UID: \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\") " pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:03 crc kubenswrapper[4661]: I0120 19:14:03.249782 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-utilities\") pod \"redhat-operators-q7mmn\" (UID: \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\") " pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:03 crc kubenswrapper[4661]: I0120 19:14:03.249837 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckg6l\" (UniqueName: \"kubernetes.io/projected/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-kube-api-access-ckg6l\") pod \"redhat-operators-q7mmn\" (UID: 
\"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\") " pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:03 crc kubenswrapper[4661]: I0120 19:14:03.250016 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-catalog-content\") pod \"redhat-operators-q7mmn\" (UID: \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\") " pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:03 crc kubenswrapper[4661]: I0120 19:14:03.250330 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-utilities\") pod \"redhat-operators-q7mmn\" (UID: \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\") " pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:03 crc kubenswrapper[4661]: I0120 19:14:03.250403 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-catalog-content\") pod \"redhat-operators-q7mmn\" (UID: \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\") " pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:03 crc kubenswrapper[4661]: I0120 19:14:03.529951 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckg6l\" (UniqueName: \"kubernetes.io/projected/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-kube-api-access-ckg6l\") pod \"redhat-operators-q7mmn\" (UID: \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\") " pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:03 crc kubenswrapper[4661]: I0120 19:14:03.593331 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:04 crc kubenswrapper[4661]: I0120 19:14:04.079472 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7mmn"] Jan 20 19:14:04 crc kubenswrapper[4661]: W0120 19:14:04.098451 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7edb2dbc_f079_4eb8_9c40_bac2e1bdc1c7.slice/crio-0a37bce001ca78524ea008a98dc6b6a4217658d5708ea40c89f8487cba5921a8 WatchSource:0}: Error finding container 0a37bce001ca78524ea008a98dc6b6a4217658d5708ea40c89f8487cba5921a8: Status 404 returned error can't find the container with id 0a37bce001ca78524ea008a98dc6b6a4217658d5708ea40c89f8487cba5921a8 Jan 20 19:14:04 crc kubenswrapper[4661]: I0120 19:14:04.326768 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7mmn" event={"ID":"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7","Type":"ContainerStarted","Data":"9bbfba132e7f0f353c425bf14dc1386315ca307006dc20a540546181646aa7e1"} Jan 20 19:14:04 crc kubenswrapper[4661]: I0120 19:14:04.327152 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7mmn" event={"ID":"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7","Type":"ContainerStarted","Data":"0a37bce001ca78524ea008a98dc6b6a4217658d5708ea40c89f8487cba5921a8"} Jan 20 19:14:05 crc kubenswrapper[4661]: I0120 19:14:05.337734 4661 generic.go:334] "Generic (PLEG): container finished" podID="7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" containerID="9bbfba132e7f0f353c425bf14dc1386315ca307006dc20a540546181646aa7e1" exitCode=0 Jan 20 19:14:05 crc kubenswrapper[4661]: I0120 19:14:05.337794 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-q7mmn" event={"ID":"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7","Type":"ContainerDied","Data":"9bbfba132e7f0f353c425bf14dc1386315ca307006dc20a540546181646aa7e1"} Jan 20 19:14:05 crc kubenswrapper[4661]: I0120 19:14:05.340291 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 19:14:07 crc kubenswrapper[4661]: I0120 19:14:07.356329 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7mmn" event={"ID":"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7","Type":"ContainerStarted","Data":"1e7779047fd64ef72c86fb9bb6b2fe3dad0b59cc50812f8ecd224b080901fc3f"} Jan 20 19:14:10 crc kubenswrapper[4661]: I0120 19:14:10.388935 4661 generic.go:334] "Generic (PLEG): container finished" podID="7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" containerID="1e7779047fd64ef72c86fb9bb6b2fe3dad0b59cc50812f8ecd224b080901fc3f" exitCode=0 Jan 20 19:14:10 crc kubenswrapper[4661]: I0120 19:14:10.389528 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7mmn" event={"ID":"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7","Type":"ContainerDied","Data":"1e7779047fd64ef72c86fb9bb6b2fe3dad0b59cc50812f8ecd224b080901fc3f"} Jan 20 19:14:12 crc kubenswrapper[4661]: I0120 19:14:12.432100 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7mmn" event={"ID":"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7","Type":"ContainerStarted","Data":"7f085c1dc650078f1baf7f9d0d97037ebb00f1d71d27527397c30c446b611756"} Jan 20 19:14:12 crc kubenswrapper[4661]: I0120 19:14:12.483227 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q7mmn" podStartSLOduration=4.56769253 podStartE2EDuration="10.483194727s" podCreationTimestamp="2026-01-20 19:14:02 +0000 UTC" firstStartedPulling="2026-01-20 19:14:05.339975721 +0000 UTC m=+4101.670765383" lastFinishedPulling="2026-01-20 19:14:11.255477908 +0000 UTC m=+4107.586267580" observedRunningTime="2026-01-20 19:14:12.461641758 +0000 UTC m=+4108.792431430" watchObservedRunningTime="2026-01-20 19:14:12.483194727 +0000 UTC m=+4108.813984429" Jan 20 19:14:13 crc kubenswrapper[4661]: I0120 19:14:13.595725 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:13 crc kubenswrapper[4661]: I0120 19:14:13.596022 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:14 crc kubenswrapper[4661]: I0120 19:14:14.723798 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q7mmn" podUID="7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" containerName="registry-server" probeResult="failure" output=< Jan 20 19:14:14 crc kubenswrapper[4661]: timeout: failed to connect service ":50051" within 1s Jan 20 19:14:14 crc kubenswrapper[4661]: > Jan 20 19:14:23 crc kubenswrapper[4661]: I0120 19:14:23.655917 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:23 crc kubenswrapper[4661]: I0120 19:14:23.709423 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:23 crc kubenswrapper[4661]: I0120 19:14:23.891982 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7mmn"] Jan 20 
19:14:25 crc kubenswrapper[4661]: I0120 19:14:25.573012 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q7mmn" podUID="7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" containerName="registry-server" containerID="cri-o://7f085c1dc650078f1baf7f9d0d97037ebb00f1d71d27527397c30c446b611756" gracePeriod=2 Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.327776 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.457084 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-catalog-content\") pod \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\" (UID: \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\") " Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.457300 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-utilities\") pod \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\" (UID: \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\") " Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.457385 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckg6l\" (UniqueName: \"kubernetes.io/projected/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-kube-api-access-ckg6l\") pod \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\" (UID: \"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7\") " Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.458539 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-utilities" (OuterVolumeSpecName: "utilities") pod "7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" (UID: "7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.467419 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-kube-api-access-ckg6l" (OuterVolumeSpecName: "kube-api-access-ckg6l") pod "7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" (UID: "7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7"). InnerVolumeSpecName "kube-api-access-ckg6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.559947 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.559987 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckg6l\" (UniqueName: \"kubernetes.io/projected/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-kube-api-access-ckg6l\") on node \"crc\" DevicePath \"\"" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.584884 4661 generic.go:334] "Generic (PLEG): container finished" podID="7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" containerID="7f085c1dc650078f1baf7f9d0d97037ebb00f1d71d27527397c30c446b611756" exitCode=0 Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.584931 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7mmn" event={"ID":"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7","Type":"ContainerDied","Data":"7f085c1dc650078f1baf7f9d0d97037ebb00f1d71d27527397c30c446b611756"} Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.584963 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7mmn" event={"ID":"7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7","Type":"ContainerDied","Data":"0a37bce001ca78524ea008a98dc6b6a4217658d5708ea40c89f8487cba5921a8"} Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.584983 4661 scope.go:117] "RemoveContainer" containerID="7f085c1dc650078f1baf7f9d0d97037ebb00f1d71d27527397c30c446b611756" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.585121 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7mmn" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.588491 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" (UID: "7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.608092 4661 scope.go:117] "RemoveContainer" containerID="1e7779047fd64ef72c86fb9bb6b2fe3dad0b59cc50812f8ecd224b080901fc3f" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.648030 4661 scope.go:117] "RemoveContainer" containerID="9bbfba132e7f0f353c425bf14dc1386315ca307006dc20a540546181646aa7e1" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.663431 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.687074 4661 scope.go:117] "RemoveContainer" containerID="7f085c1dc650078f1baf7f9d0d97037ebb00f1d71d27527397c30c446b611756" Jan 20 19:14:26 crc kubenswrapper[4661]: E0120 19:14:26.687532 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f085c1dc650078f1baf7f9d0d97037ebb00f1d71d27527397c30c446b611756\": container with ID starting with 7f085c1dc650078f1baf7f9d0d97037ebb00f1d71d27527397c30c446b611756 not found: ID does not exist" containerID="7f085c1dc650078f1baf7f9d0d97037ebb00f1d71d27527397c30c446b611756" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.687575 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f085c1dc650078f1baf7f9d0d97037ebb00f1d71d27527397c30c446b611756"} err="failed to get container status \"7f085c1dc650078f1baf7f9d0d97037ebb00f1d71d27527397c30c446b611756\": rpc error: code = NotFound desc = could not find container \"7f085c1dc650078f1baf7f9d0d97037ebb00f1d71d27527397c30c446b611756\": container with ID starting with 7f085c1dc650078f1baf7f9d0d97037ebb00f1d71d27527397c30c446b611756 not found: ID does not exist" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.687605 4661 scope.go:117] "RemoveContainer" containerID="1e7779047fd64ef72c86fb9bb6b2fe3dad0b59cc50812f8ecd224b080901fc3f" Jan 20 19:14:26 crc kubenswrapper[4661]: E0120 19:14:26.688193 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e7779047fd64ef72c86fb9bb6b2fe3dad0b59cc50812f8ecd224b080901fc3f\": container with ID starting with 1e7779047fd64ef72c86fb9bb6b2fe3dad0b59cc50812f8ecd224b080901fc3f not found: ID does not exist" containerID="1e7779047fd64ef72c86fb9bb6b2fe3dad0b59cc50812f8ecd224b080901fc3f" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.688223 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7779047fd64ef72c86fb9bb6b2fe3dad0b59cc50812f8ecd224b080901fc3f"} err="failed to get container status \"1e7779047fd64ef72c86fb9bb6b2fe3dad0b59cc50812f8ecd224b080901fc3f\": rpc error: code = NotFound desc = could not find container \"1e7779047fd64ef72c86fb9bb6b2fe3dad0b59cc50812f8ecd224b080901fc3f\": container with ID starting with 1e7779047fd64ef72c86fb9bb6b2fe3dad0b59cc50812f8ecd224b080901fc3f not found: ID does not exist" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.688245 4661 scope.go:117] "RemoveContainer" containerID="9bbfba132e7f0f353c425bf14dc1386315ca307006dc20a540546181646aa7e1" Jan 20 19:14:26 crc kubenswrapper[4661]: E0120 19:14:26.688441 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9bbfba132e7f0f353c425bf14dc1386315ca307006dc20a540546181646aa7e1\": container with ID starting with 9bbfba132e7f0f353c425bf14dc1386315ca307006dc20a540546181646aa7e1 not found: ID does not exist" containerID="9bbfba132e7f0f353c425bf14dc1386315ca307006dc20a540546181646aa7e1" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.688464 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bbfba132e7f0f353c425bf14dc1386315ca307006dc20a540546181646aa7e1"} err="failed to get container status \"9bbfba132e7f0f353c425bf14dc1386315ca307006dc20a540546181646aa7e1\": rpc error: code = NotFound desc = could not find container \"9bbfba132e7f0f353c425bf14dc1386315ca307006dc20a540546181646aa7e1\": container with ID starting with 9bbfba132e7f0f353c425bf14dc1386315ca307006dc20a540546181646aa7e1 not found: ID does not exist" Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.920252 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7mmn"] Jan 20 19:14:26 crc kubenswrapper[4661]: I0120 19:14:26.934800 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q7mmn"] Jan 20 19:14:28 crc kubenswrapper[4661]: I0120 19:14:28.157232 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" path="/var/lib/kubelet/pods/7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7/volumes" Jan 20 19:14:29 crc kubenswrapper[4661]: I0120 19:14:29.323992 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:14:29 crc kubenswrapper[4661]: I0120 19:14:29.324508 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:14:59 crc kubenswrapper[4661]: I0120 19:14:59.324008 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:14:59 crc kubenswrapper[4661]: I0120 19:14:59.326244 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:14:59 crc kubenswrapper[4661]: I0120 19:14:59.326476 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 19:14:59 crc kubenswrapper[4661]: I0120 19:14:59.328011 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9757b3bc3bc7510737ffec1f5986f4d984f53c48810da16a3a8aa159c2c5415e"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon 
failed liveness probe, will be restarted" Jan 20 19:14:59 crc kubenswrapper[4661]: I0120 19:14:59.328322 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://9757b3bc3bc7510737ffec1f5986f4d984f53c48810da16a3a8aa159c2c5415e" gracePeriod=600 Jan 20 19:14:59 crc kubenswrapper[4661]: I0120 19:14:59.905134 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="9757b3bc3bc7510737ffec1f5986f4d984f53c48810da16a3a8aa159c2c5415e" exitCode=0 Jan 20 19:14:59 crc kubenswrapper[4661]: I0120 19:14:59.905459 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"9757b3bc3bc7510737ffec1f5986f4d984f53c48810da16a3a8aa159c2c5415e"} Jan 20 19:14:59 crc kubenswrapper[4661]: I0120 19:14:59.905488 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469"} Jan 20 19:14:59 crc kubenswrapper[4661]: I0120 19:14:59.905507 4661 scope.go:117] "RemoveContainer" containerID="87b9063ae5d35d6fe33c871869276deec2f28bf2db00de46e4f772a447877d06" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.189957 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn"] Jan 20 19:15:00 crc kubenswrapper[4661]: E0120 19:15:00.190761 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" containerName="extract-content" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.190783 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" containerName="extract-content" Jan 20 19:15:00 crc kubenswrapper[4661]: E0120 19:15:00.190808 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" containerName="extract-utilities" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.190817 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" containerName="extract-utilities" Jan 20 19:15:00 crc kubenswrapper[4661]: E0120 19:15:00.190844 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" containerName="registry-server" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.190853 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" containerName="registry-server" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.191037 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="7edb2dbc-f079-4eb8-9c40-bac2e1bdc1c7" containerName="registry-server" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.191816 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.194472 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.194984 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.204656 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn"] Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.310732 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-config-volume\") pod \"collect-profiles-29482275-75fvn\" (UID: \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.310786 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znfmj\" (UniqueName: \"kubernetes.io/projected/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-kube-api-access-znfmj\") pod \"collect-profiles-29482275-75fvn\" (UID: \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.311108 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-secret-volume\") pod \"collect-profiles-29482275-75fvn\" (UID: \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.413125 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-config-volume\") pod \"collect-profiles-29482275-75fvn\" (UID: \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.413201 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znfmj\" (UniqueName: \"kubernetes.io/projected/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-kube-api-access-znfmj\") pod \"collect-profiles-29482275-75fvn\" (UID: \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.413284 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-secret-volume\") pod \"collect-profiles-29482275-75fvn\" (UID: \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.414695 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-config-volume\") pod 
\"collect-profiles-29482275-75fvn\" (UID: \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.531770 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-secret-volume\") pod \"collect-profiles-29482275-75fvn\" (UID: \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.532720 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znfmj\" (UniqueName: \"kubernetes.io/projected/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-kube-api-access-znfmj\") pod \"collect-profiles-29482275-75fvn\" (UID: \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" Jan 20 19:15:00 crc kubenswrapper[4661]: I0120 19:15:00.810223 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" Jan 20 19:15:01 crc kubenswrapper[4661]: I0120 19:15:01.323992 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn"] Jan 20 19:15:01 crc kubenswrapper[4661]: I0120 19:15:01.950133 4661 generic.go:334] "Generic (PLEG): container finished" podID="2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e" containerID="b7b766c0701ec1e1eae437331bfa4f2edcc33c3c8cae6beca1c10af24e410660" exitCode=0 Jan 20 19:15:01 crc kubenswrapper[4661]: I0120 19:15:01.950191 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" event={"ID":"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e","Type":"ContainerDied","Data":"b7b766c0701ec1e1eae437331bfa4f2edcc33c3c8cae6beca1c10af24e410660"} Jan 20 19:15:01 crc kubenswrapper[4661]: I0120 19:15:01.950524 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" event={"ID":"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e","Type":"ContainerStarted","Data":"82fb0110d541aaf7e6af8da5c8b5e8da54b4799800ba6f3ca6bd56727ce347dd"} Jan 20 19:15:03 crc kubenswrapper[4661]: I0120 19:15:03.446214 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" Jan 20 19:15:03 crc kubenswrapper[4661]: I0120 19:15:03.486236 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-secret-volume\") pod \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\" (UID: \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\") " Jan 20 19:15:03 crc kubenswrapper[4661]: I0120 19:15:03.486412 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-config-volume\") pod \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\" (UID: \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\") " Jan 20 19:15:03 crc kubenswrapper[4661]: I0120 19:15:03.486564 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znfmj\" (UniqueName: \"kubernetes.io/projected/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-kube-api-access-znfmj\") pod \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\" (UID: \"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e\") " Jan 20 19:15:03 crc kubenswrapper[4661]: I0120 19:15:03.487529 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-config-volume" (OuterVolumeSpecName: "config-volume") pod "2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e" (UID: "2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:15:03 crc kubenswrapper[4661]: I0120 19:15:03.497939 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-kube-api-access-znfmj" (OuterVolumeSpecName: "kube-api-access-znfmj") pod "2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e" (UID: "2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e"). InnerVolumeSpecName "kube-api-access-znfmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:15:03 crc kubenswrapper[4661]: I0120 19:15:03.498784 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e" (UID: "2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:15:03 crc kubenswrapper[4661]: I0120 19:15:03.589040 4661 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 19:15:03 crc kubenswrapper[4661]: I0120 19:15:03.589088 4661 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 19:15:03 crc kubenswrapper[4661]: I0120 19:15:03.589098 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znfmj\" (UniqueName: \"kubernetes.io/projected/2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e-kube-api-access-znfmj\") on node \"crc\" DevicePath \"\"" Jan 20 19:15:03 crc kubenswrapper[4661]: I0120 19:15:03.969254 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" event={"ID":"2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e","Type":"ContainerDied","Data":"82fb0110d541aaf7e6af8da5c8b5e8da54b4799800ba6f3ca6bd56727ce347dd"} Jan 20 19:15:03 crc kubenswrapper[4661]: I0120 19:15:03.969312 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482275-75fvn" Jan 20 19:15:03 crc kubenswrapper[4661]: I0120 19:15:03.969493 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82fb0110d541aaf7e6af8da5c8b5e8da54b4799800ba6f3ca6bd56727ce347dd" Jan 20 19:15:04 crc kubenswrapper[4661]: I0120 19:15:04.531713 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl"] Jan 20 19:15:04 crc kubenswrapper[4661]: I0120 19:15:04.537950 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482230-pvvcl"] Jan 20 19:15:06 crc kubenswrapper[4661]: I0120 19:15:06.156481 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb388f0-e71a-4a38-88d2-0569af45dad4" path="/var/lib/kubelet/pods/fbb388f0-e71a-4a38-88d2-0569af45dad4/volumes" Jan 20 19:15:19 crc kubenswrapper[4661]: I0120 19:15:19.208797 4661 scope.go:117] "RemoveContainer" containerID="f8a4ae0b9950e8fd39c5057982e00647917bbb2b8176cd8e574d5847ce553728" Jan 20 19:16:59 crc kubenswrapper[4661]: I0120 19:16:59.323494 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:16:59 crc kubenswrapper[4661]: I0120 19:16:59.324291 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:17:24 crc kubenswrapper[4661]: I0120 19:17:24.235728 4661 generic.go:334] "Generic (PLEG): container finished" podID="fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" containerID="5d6626055d7f4e634db9c354fe71f73c4e5c0f9ea36ba6d8224e9c21c40d0966" exitCode=0 Jan 20 19:17:24 crc kubenswrapper[4661]: I0120 19:17:24.235817 4661 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff","Type":"ContainerDied","Data":"5d6626055d7f4e634db9c354fe71f73c4e5c0f9ea36ba6d8224e9c21c40d0966"} Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.549963 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.661167 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-test-operator-ephemeral-temporary\") pod \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.661419 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-test-operator-ephemeral-workdir\") pod \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.661443 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-config-data\") pod \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.661936 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" (UID: "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.662577 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-config-data" (OuterVolumeSpecName: "config-data") pod "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" (UID: "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.665388 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" (UID: "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.666022 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgw94\" (UniqueName: \"kubernetes.io/projected/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-kube-api-access-mgw94\") pod \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.666064 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-openstack-config-secret\") pod \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.666109 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-ssh-key\") pod \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.666188 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.666236 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-openstack-config\") pod \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.666301 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-ca-certs\") pod \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\" (UID: \"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff\") " Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.666908 4661 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.667636 4661 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.667660 4661 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.673276 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-kube-api-access-mgw94" (OuterVolumeSpecName: "kube-api-access-mgw94") pod "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" (UID: "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff"). InnerVolumeSpecName "kube-api-access-mgw94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.675138 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" (UID: "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.698375 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" (UID: "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.723404 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" (UID: "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.731313 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" (UID: "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.732624 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" (UID: "fcc30bf2-7b68-4438-b7db-b041e1d1e2ff"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.770506 4661 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.770857 4661 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.770876 4661 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.770891 4661 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.770900 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgw94\" (UniqueName: \"kubernetes.io/projected/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-kube-api-access-mgw94\") on node \"crc\" DevicePath \"\"" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.770909 4661 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fcc30bf2-7b68-4438-b7db-b041e1d1e2ff-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.790222 4661 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 20 19:17:25 crc kubenswrapper[4661]: I0120 19:17:25.872169 4661 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 20 19:17:26 crc kubenswrapper[4661]: I0120 19:17:26.255334 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fcc30bf2-7b68-4438-b7db-b041e1d1e2ff","Type":"ContainerDied","Data":"8ce27c5ea5e20be44b55091f7303b1c9971c4e3009cba0b0f474d5df5d3cd949"} Jan 20 19:17:26 crc kubenswrapper[4661]: I0120 19:17:26.255372 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ce27c5ea5e20be44b55091f7303b1c9971c4e3009cba0b0f474d5df5d3cd949" Jan 20 19:17:26 crc kubenswrapper[4661]: I0120 19:17:26.255416 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 20 19:17:29 crc kubenswrapper[4661]: I0120 19:17:29.323500 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:17:29 crc kubenswrapper[4661]: I0120 19:17:29.324411 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.075285 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 20 19:17:34 crc kubenswrapper[4661]: E0120 19:17:34.076811 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e" containerName="collect-profiles" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.076839 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e" containerName="collect-profiles" Jan 20 19:17:34 crc kubenswrapper[4661]: E0120 19:17:34.076891 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" containerName="tempest-tests-tempest-tests-runner" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.076905 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" containerName="tempest-tests-tempest-tests-runner" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.077262 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="2adb6dec-5d06-46fd-9f93-c7c2d4f4aa8e" containerName="collect-profiles" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.077287 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc30bf2-7b68-4438-b7db-b041e1d1e2ff" containerName="tempest-tests-tempest-tests-runner" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.079190 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.082184 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zxggd" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.088899 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.249548 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bck8\" (UniqueName: \"kubernetes.io/projected/db09fccc-50d1-4990-b482-a782822de50d-kube-api-access-6bck8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db09fccc-50d1-4990-b482-a782822de50d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.250100 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db09fccc-50d1-4990-b482-a782822de50d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.352099 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db09fccc-50d1-4990-b482-a782822de50d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.352233 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bck8\" (UniqueName: \"kubernetes.io/projected/db09fccc-50d1-4990-b482-a782822de50d-kube-api-access-6bck8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db09fccc-50d1-4990-b482-a782822de50d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.352529 4661 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db09fccc-50d1-4990-b482-a782822de50d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.378216 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bck8\" (UniqueName: \"kubernetes.io/projected/db09fccc-50d1-4990-b482-a782822de50d-kube-api-access-6bck8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db09fccc-50d1-4990-b482-a782822de50d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.381766 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db09fccc-50d1-4990-b482-a782822de50d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 20 19:17:34 crc 
kubenswrapper[4661]: I0120 19:17:34.402739 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 20 19:17:34 crc kubenswrapper[4661]: I0120 19:17:34.910635 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 20 19:17:35 crc kubenswrapper[4661]: I0120 19:17:35.335426 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"db09fccc-50d1-4990-b482-a782822de50d","Type":"ContainerStarted","Data":"b6560285471de93380c0db26f60cb633b3622bf4a60b268f937f90734692a3c9"} Jan 20 19:17:36 crc kubenswrapper[4661]: I0120 19:17:36.348027 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"db09fccc-50d1-4990-b482-a782822de50d","Type":"ContainerStarted","Data":"6bc24b7e2a1d105d3e245525167a70601b5212e71154c61c1d8a041ce0a0ce27"} Jan 20 19:17:36 crc kubenswrapper[4661]: I0120 19:17:36.376787 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.459964883 podStartE2EDuration="2.376764842s" podCreationTimestamp="2026-01-20 19:17:34 +0000 UTC" firstStartedPulling="2026-01-20 19:17:34.911159291 +0000 UTC m=+4311.241948953" lastFinishedPulling="2026-01-20 19:17:35.82795925 +0000 UTC m=+4312.158748912" observedRunningTime="2026-01-20 19:17:36.369589713 +0000 UTC m=+4312.700379395" watchObservedRunningTime="2026-01-20 19:17:36.376764842 +0000 UTC m=+4312.707554514" Jan 20 19:17:59 crc kubenswrapper[4661]: I0120 19:17:59.323233 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:17:59 crc kubenswrapper[4661]: I0120 19:17:59.323938 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:17:59 crc kubenswrapper[4661]: I0120 19:17:59.324003 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 19:17:59 crc kubenswrapper[4661]: I0120 19:17:59.324936 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 19:17:59 crc kubenswrapper[4661]: I0120 19:17:59.325031 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" gracePeriod=600 Jan 20 19:17:59 crc kubenswrapper[4661]: E0120 19:17:59.468897 4661 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:17:59 crc kubenswrapper[4661]: I0120 19:17:59.581856 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" exitCode=0 Jan 20 19:17:59 crc kubenswrapper[4661]: I0120 19:17:59.581896 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469"} Jan 20 19:17:59 crc kubenswrapper[4661]: I0120 19:17:59.581929 4661 scope.go:117] "RemoveContainer" containerID="9757b3bc3bc7510737ffec1f5986f4d984f53c48810da16a3a8aa159c2c5415e" Jan 20 19:17:59 crc kubenswrapper[4661]: I0120 19:17:59.582532 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:17:59 crc kubenswrapper[4661]: E0120 19:17:59.582781 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.009471 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7bgfs/must-gather-vdf4s"] Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.012369 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7bgfs/must-gather-vdf4s" Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.015652 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7bgfs"/"kube-root-ca.crt" Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.016815 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7bgfs"/"openshift-service-ca.crt" Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.020370 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7bgfs"/"default-dockercfg-gv62q" Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.029987 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7bgfs/must-gather-vdf4s"] Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.178283 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh27h\" (UniqueName: \"kubernetes.io/projected/dec0aa71-5aea-4d67-bf53-1f4a04b38a7a-kube-api-access-vh27h\") pod \"must-gather-vdf4s\" (UID: \"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a\") " pod="openshift-must-gather-7bgfs/must-gather-vdf4s" Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.178763 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dec0aa71-5aea-4d67-bf53-1f4a04b38a7a-must-gather-output\") pod \"must-gather-vdf4s\" (UID: \"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a\") " pod="openshift-must-gather-7bgfs/must-gather-vdf4s" Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.280115 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dec0aa71-5aea-4d67-bf53-1f4a04b38a7a-must-gather-output\") pod \"must-gather-vdf4s\" (UID: \"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a\") " pod="openshift-must-gather-7bgfs/must-gather-vdf4s" Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.280215 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh27h\" (UniqueName: \"kubernetes.io/projected/dec0aa71-5aea-4d67-bf53-1f4a04b38a7a-kube-api-access-vh27h\") pod \"must-gather-vdf4s\" (UID: \"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a\") " pod="openshift-must-gather-7bgfs/must-gather-vdf4s" Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.280541 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dec0aa71-5aea-4d67-bf53-1f4a04b38a7a-must-gather-output\") pod \"must-gather-vdf4s\" (UID: \"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a\") " pod="openshift-must-gather-7bgfs/must-gather-vdf4s" Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.315751 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh27h\" (UniqueName: \"kubernetes.io/projected/dec0aa71-5aea-4d67-bf53-1f4a04b38a7a-kube-api-access-vh27h\") pod \"must-gather-vdf4s\" (UID: \"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a\") " pod="openshift-must-gather-7bgfs/must-gather-vdf4s" Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.329411 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7bgfs/must-gather-vdf4s" Jan 20 19:18:02 crc kubenswrapper[4661]: I0120 19:18:02.784632 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7bgfs/must-gather-vdf4s"] Jan 20 19:18:03 crc kubenswrapper[4661]: I0120 19:18:03.633880 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7bgfs/must-gather-vdf4s" event={"ID":"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a","Type":"ContainerStarted","Data":"68310c1fb8607564f277d1a1e545e26072eeedaa666a4e5d80fb1df27c6c545e"} Jan 20 19:18:12 crc kubenswrapper[4661]: I0120 19:18:12.737635 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7bgfs/must-gather-vdf4s" event={"ID":"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a","Type":"ContainerStarted","Data":"59b5ede8e1f2c412bf65eaf53dccb54cf13dbb1159c80808b98bac29da71ea7c"} Jan 20 19:18:12 crc kubenswrapper[4661]: I0120 19:18:12.738293 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7bgfs/must-gather-vdf4s" event={"ID":"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a","Type":"ContainerStarted","Data":"21da3d6317aa762ceff13c328487020472a0b2825302720548d71b607385ae89"} Jan 20 19:18:13 crc kubenswrapper[4661]: I0120 19:18:13.766105 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7bgfs/must-gather-vdf4s" podStartSLOduration=3.313841447 podStartE2EDuration="12.766082716s" podCreationTimestamp="2026-01-20 19:18:01 +0000 UTC" firstStartedPulling="2026-01-20 19:18:02.794472516 +0000 UTC m=+4339.125262188" lastFinishedPulling="2026-01-20 19:18:12.246713795 +0000 UTC m=+4348.577503457" observedRunningTime="2026-01-20 19:18:13.760806006 +0000 UTC m=+4350.091595668" watchObservedRunningTime="2026-01-20 19:18:13.766082716 +0000 UTC m=+4350.096872368" Jan 20 19:18:15 crc kubenswrapper[4661]: I0120 19:18:15.142233 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:18:15 crc kubenswrapper[4661]: E0120 19:18:15.142825 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:18:18 crc kubenswrapper[4661]: I0120 19:18:18.208895 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7bgfs/crc-debug-vx6vh"] Jan 20 19:18:18 crc kubenswrapper[4661]: I0120 19:18:18.210379 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" Jan 20 19:18:18 crc kubenswrapper[4661]: I0120 19:18:18.393797 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5be9df1-a80b-4503-8bb4-e1b1d314d3f5-host\") pod \"crc-debug-vx6vh\" (UID: \"f5be9df1-a80b-4503-8bb4-e1b1d314d3f5\") " pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" Jan 20 19:18:18 crc kubenswrapper[4661]: I0120 19:18:18.393910 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrnhc\" (UniqueName: \"kubernetes.io/projected/f5be9df1-a80b-4503-8bb4-e1b1d314d3f5-kube-api-access-lrnhc\") pod \"crc-debug-vx6vh\" (UID: \"f5be9df1-a80b-4503-8bb4-e1b1d314d3f5\") " pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" Jan 20 19:18:18 crc kubenswrapper[4661]: I0120 19:18:18.512181 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrnhc\" (UniqueName: \"kubernetes.io/projected/f5be9df1-a80b-4503-8bb4-e1b1d314d3f5-kube-api-access-lrnhc\") pod \"crc-debug-vx6vh\" (UID: \"f5be9df1-a80b-4503-8bb4-e1b1d314d3f5\") " pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" Jan 20 19:18:18 crc kubenswrapper[4661]: I0120 19:18:18.512442 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5be9df1-a80b-4503-8bb4-e1b1d314d3f5-host\") pod \"crc-debug-vx6vh\" (UID: \"f5be9df1-a80b-4503-8bb4-e1b1d314d3f5\") " pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" Jan 20 19:18:18 crc kubenswrapper[4661]: I0120 19:18:18.512712 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5be9df1-a80b-4503-8bb4-e1b1d314d3f5-host\") pod \"crc-debug-vx6vh\" (UID: \"f5be9df1-a80b-4503-8bb4-e1b1d314d3f5\") " pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" Jan 20 19:18:18 crc kubenswrapper[4661]: I0120 19:18:18.534400 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrnhc\" (UniqueName: \"kubernetes.io/projected/f5be9df1-a80b-4503-8bb4-e1b1d314d3f5-kube-api-access-lrnhc\") pod \"crc-debug-vx6vh\" (UID: \"f5be9df1-a80b-4503-8bb4-e1b1d314d3f5\") " pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" Jan 20 19:18:18 crc kubenswrapper[4661]: I0120 19:18:18.826302 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" Jan 20 19:18:19 crc kubenswrapper[4661]: I0120 19:18:19.792237 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" event={"ID":"f5be9df1-a80b-4503-8bb4-e1b1d314d3f5","Type":"ContainerStarted","Data":"0db8c3d13211bd7e420e3750435c1a4e67cdc0fbde0a840b1ef5d8c14c3cbc81"} Jan 20 19:18:21 crc kubenswrapper[4661]: I0120 19:18:21.607359 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b8c8444c8-77p78_e636b383-8c8d-4554-9717-35ba37b726f5/barbican-api-log/0.log" Jan 20 19:18:21 crc kubenswrapper[4661]: I0120 19:18:21.619723 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b8c8444c8-77p78_e636b383-8c8d-4554-9717-35ba37b726f5/barbican-api/0.log" Jan 20 19:18:21 crc kubenswrapper[4661]: I0120 19:18:21.651618 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-596b75897b-2g4gm_f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58/barbican-keystone-listener-log/0.log" Jan 20 19:18:21 crc kubenswrapper[4661]: I0120 19:18:21.667959 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-596b75897b-2g4gm_f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58/barbican-keystone-listener/0.log" Jan 20 19:18:21 crc kubenswrapper[4661]: I0120 19:18:21.750457 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b484f76ff-qrd8w_341d9328-73af-4986-9901-43b929a9e030/barbican-worker-log/0.log" Jan 20 19:18:21 crc kubenswrapper[4661]: I0120 19:18:21.759696 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b484f76ff-qrd8w_341d9328-73af-4986-9901-43b929a9e030/barbican-worker/0.log" Jan 20 19:18:21 crc kubenswrapper[4661]: I0120 19:18:21.821107 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7_e65dc54b-d336-441e-b167-cb297ef179a5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:21 crc kubenswrapper[4661]: I0120 19:18:21.867334 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_359e2c27-69df-47ab-95bb-e9f70c04f988/ceilometer-central-agent/0.log" Jan 20 19:18:21 crc kubenswrapper[4661]: I0120 19:18:21.894034 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_359e2c27-69df-47ab-95bb-e9f70c04f988/ceilometer-notification-agent/0.log" Jan 20 19:18:21 crc kubenswrapper[4661]: I0120 19:18:21.902578 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_359e2c27-69df-47ab-95bb-e9f70c04f988/sg-core/0.log" Jan 20 19:18:21 crc kubenswrapper[4661]: I0120 19:18:21.913583 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_359e2c27-69df-47ab-95bb-e9f70c04f988/proxy-httpd/0.log" Jan 20 19:18:21 crc kubenswrapper[4661]: I0120 19:18:21.939050 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq_3689afcd-a340-4415-a127-c9ce66ab8d7b/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:21 crc kubenswrapper[4661]: I0120 19:18:21.954743 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk_98f2afa8-7c09-4427-a558-a3da2bfd4df4/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:21 crc 
kubenswrapper[4661]: I0120 19:18:21.974648 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b4fa215a-165d-44b7-9bfd-19a2a9a5205c/cinder-api-log/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.057952 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b4fa215a-165d-44b7-9bfd-19a2a9a5205c/cinder-api/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.226424 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3066acf4-e48e-410e-8623-f29b5424f4fe/cinder-backup/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.243087 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3066acf4-e48e-410e-8623-f29b5424f4fe/probe/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.302203 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_da178eaf-bf04-4638-a071-808d119fd4ec/cinder-scheduler/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.385435 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_da178eaf-bf04-4638-a071-808d119fd4ec/probe/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.448342 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_c8c04a60-5bb8-4d54-93a6-1acfcbea3358/cinder-volume/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.532799 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_c8c04a60-5bb8-4d54-93a6-1acfcbea3358/probe/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.560147 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9_77d1abe1-5293-4f5d-b062-d3fc2bb71510/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.601545 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8_e38a8deb-8469-4d4d-a865-4d374e8fcb7c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.770902 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-77766fdf55-f5frm_a3cb0dc2-5231-45c2-81ae-038a006f73f0/dnsmasq-dns/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.780207 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-77766fdf55-f5frm_a3cb0dc2-5231-45c2-81ae-038a006f73f0/init/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.797271 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ff298cae-f405-48cf-a8b3-c297f1e6cf80/glance-log/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.812735 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ff298cae-f405-48cf-a8b3-c297f1e6cf80/glance-httpd/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.825919 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_26659b2b-07b6-4184-b2a8-bad999a10fd3/glance-log/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.840329 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_26659b2b-07b6-4184-b2a8-bad999a10fd3/glance-httpd/0.log" Jan 20 19:18:22 crc kubenswrapper[4661]: I0120 19:18:22.947789 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-658f6cd46d-59d52_3c2196ee-0a5d-49b8-9f9b-4eada2792101/horizon-log/0.log" Jan 20 19:18:23 crc kubenswrapper[4661]: I0120 19:18:23.056148 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-658f6cd46d-59d52_3c2196ee-0a5d-49b8-9f9b-4eada2792101/horizon/0.log" Jan 20 19:18:23 crc kubenswrapper[4661]: I0120 19:18:23.092949 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7_47dfef92-6673-4a9f-9999-47f830dd42bc/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:23 crc kubenswrapper[4661]: I0120 19:18:23.131957 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-p6cj5_4fae988d-a1e4-4f89-8a5f-45989cd3584c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:23 crc kubenswrapper[4661]: I0120 19:18:23.304648 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-c468c8b55-f2kw4_40923d71-e4f3-4c19-939c-e9f9b12fe635/keystone-api/0.log" Jan 20 19:18:23 crc kubenswrapper[4661]: I0120 19:18:23.315696 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29482261-r7s7h_089b2d3d-e382-4407-828e-cbeb9199951f/keystone-cron/0.log" Jan 20 19:18:23 crc kubenswrapper[4661]: I0120 19:18:23.329686 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a42dbd72-de9b-49d9-b7fb-b8255659f933/kube-state-metrics/0.log" Jan 20 19:18:23 crc kubenswrapper[4661]: I0120 19:18:23.498412 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-h99fk_e8b8d2fe-c25d-41e1-b32e-6e81c03e0717/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:23 crc kubenswrapper[4661]: I0120 19:18:23.511247 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_c4f33125-e16c-4df4-9c3f-9f772fe671eb/manila-api-log/0.log" Jan 20 19:18:23 crc kubenswrapper[4661]: I0120 19:18:23.624534 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_c4f33125-e16c-4df4-9c3f-9f772fe671eb/manila-api/0.log" Jan 20 19:18:23 crc kubenswrapper[4661]: I0120 19:18:23.719650 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_c4217d44-feda-4241-9ede-1e22b3324b01/manila-scheduler/0.log" Jan 20 19:18:23 crc kubenswrapper[4661]: I0120 19:18:23.726493 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_c4217d44-feda-4241-9ede-1e22b3324b01/probe/0.log" Jan 20 19:18:23 crc kubenswrapper[4661]: I0120 19:18:23.790095 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_6c223935-3aa9-491c-8f9d-638441f57742/manila-share/0.log" Jan 20 19:18:23 crc kubenswrapper[4661]: I0120 19:18:23.795776 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_6c223935-3aa9-491c-8f9d-638441f57742/probe/0.log" Jan 20 19:18:26 crc kubenswrapper[4661]: I0120 19:18:26.141973 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:18:26 crc kubenswrapper[4661]: E0120 
19:18:26.143044 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:18:31 crc kubenswrapper[4661]: I0120 19:18:31.921371 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" event={"ID":"f5be9df1-a80b-4503-8bb4-e1b1d314d3f5","Type":"ContainerStarted","Data":"163965f4c4bbe998e5d80d06a2ce4e9bafe18c567946a5ef5ba56fd3c4c2fc3e"} Jan 20 19:18:31 crc kubenswrapper[4661]: I0120 19:18:31.940084 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" podStartSLOduration=1.391539804 podStartE2EDuration="13.940063714s" podCreationTimestamp="2026-01-20 19:18:18 +0000 UTC" firstStartedPulling="2026-01-20 19:18:18.864556388 +0000 UTC m=+4355.195346050" lastFinishedPulling="2026-01-20 19:18:31.413080298 +0000 UTC m=+4367.743869960" observedRunningTime="2026-01-20 19:18:31.93421917 +0000 UTC m=+4368.265008832" watchObservedRunningTime="2026-01-20 19:18:31.940063714 +0000 UTC m=+4368.270853376" Jan 20 19:18:33 crc kubenswrapper[4661]: I0120 19:18:33.043805 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tj7mg"] Jan 20 19:18:33 crc kubenswrapper[4661]: I0120 19:18:33.046498 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:33 crc kubenswrapper[4661]: I0120 19:18:33.082155 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tj7mg"] Jan 20 19:18:33 crc kubenswrapper[4661]: I0120 19:18:33.226177 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-catalog-content\") pod \"community-operators-tj7mg\" (UID: \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\") " pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:33 crc kubenswrapper[4661]: I0120 19:18:33.226302 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mlfh\" (UniqueName: \"kubernetes.io/projected/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-kube-api-access-4mlfh\") pod \"community-operators-tj7mg\" (UID: \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\") " pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:33 crc kubenswrapper[4661]: I0120 19:18:33.226381 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-utilities\") pod \"community-operators-tj7mg\" (UID: \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\") " pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:33 crc kubenswrapper[4661]: I0120 19:18:33.330577 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-utilities\") pod \"community-operators-tj7mg\" (UID: \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\") " 
pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:33 crc kubenswrapper[4661]: I0120 19:18:33.330889 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-catalog-content\") pod \"community-operators-tj7mg\" (UID: \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\") " pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:33 crc kubenswrapper[4661]: I0120 19:18:33.331060 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mlfh\" (UniqueName: \"kubernetes.io/projected/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-kube-api-access-4mlfh\") pod \"community-operators-tj7mg\" (UID: \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\") " pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:33 crc kubenswrapper[4661]: I0120 19:18:33.331223 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-utilities\") pod \"community-operators-tj7mg\" (UID: \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\") " pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:33 crc kubenswrapper[4661]: I0120 19:18:33.332906 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-catalog-content\") pod \"community-operators-tj7mg\" (UID: \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\") " pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:33 crc kubenswrapper[4661]: I0120 19:18:33.374171 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mlfh\" (UniqueName: \"kubernetes.io/projected/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-kube-api-access-4mlfh\") pod \"community-operators-tj7mg\" (UID: \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\") " pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:33 crc kubenswrapper[4661]: I0120 19:18:33.379001 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:34 crc kubenswrapper[4661]: I0120 19:18:34.081938 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tj7mg"] Jan 20 19:18:34 crc kubenswrapper[4661]: I0120 19:18:34.950395 4661 generic.go:334] "Generic (PLEG): container finished" podID="97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" containerID="522e35767aef9e940b2987e6ece7936abb196cf41f3e887c2fdf741bf5e6f565" exitCode=0 Jan 20 19:18:34 crc kubenswrapper[4661]: I0120 19:18:34.950988 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tj7mg" event={"ID":"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244","Type":"ContainerDied","Data":"522e35767aef9e940b2987e6ece7936abb196cf41f3e887c2fdf741bf5e6f565"} Jan 20 19:18:34 crc kubenswrapper[4661]: I0120 19:18:34.951196 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tj7mg" event={"ID":"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244","Type":"ContainerStarted","Data":"3126aeb72e25c155764e66a782d542ccf97d519504b5d39b6ef51762101b7301"} Jan 20 19:18:36 crc kubenswrapper[4661]: I0120 19:18:35.976811 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tj7mg" event={"ID":"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244","Type":"ContainerStarted","Data":"505e35ef3bcba456b64042308e69f380d706f3849acc0a2dac73c1a6f31967d7"} Jan 20 19:18:37 crc kubenswrapper[4661]: I0120 19:18:37.991963 4661 generic.go:334] "Generic (PLEG): container finished" podID="97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" containerID="505e35ef3bcba456b64042308e69f380d706f3849acc0a2dac73c1a6f31967d7" exitCode=0 Jan 20 19:18:37 crc kubenswrapper[4661]: I0120 19:18:37.992023 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tj7mg" event={"ID":"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244","Type":"ContainerDied","Data":"505e35ef3bcba456b64042308e69f380d706f3849acc0a2dac73c1a6f31967d7"} Jan 20 19:18:39 crc kubenswrapper[4661]: I0120 19:18:39.142187 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:18:39 crc kubenswrapper[4661]: E0120 19:18:39.142893 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:18:40 crc kubenswrapper[4661]: I0120 19:18:40.018782 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tj7mg" event={"ID":"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244","Type":"ContainerStarted","Data":"30e71eb7a8387fb6ba59e5a9438ce4c03a9b6fecb2d0263c88bb44cfe845b592"} Jan 20 19:18:40 crc kubenswrapper[4661]: I0120 19:18:40.041080 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tj7mg" podStartSLOduration=3.05895114 podStartE2EDuration="7.041064913s" podCreationTimestamp="2026-01-20 19:18:33 +0000 UTC" firstStartedPulling="2026-01-20 19:18:34.953844587 +0000 UTC m=+4371.284634249" lastFinishedPulling="2026-01-20 19:18:38.93595836 +0000 UTC m=+4375.266748022" observedRunningTime="2026-01-20 
19:18:40.039761128 +0000 UTC m=+4376.370550810" watchObservedRunningTime="2026-01-20 19:18:40.041064913 +0000 UTC m=+4376.371854575" Jan 20 19:18:43 crc kubenswrapper[4661]: I0120 19:18:43.059565 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-htvf4_e4437f72-2da5-4c3a-8a69-4a26f3190a62/controller/0.log" Jan 20 19:18:43 crc kubenswrapper[4661]: I0120 19:18:43.068246 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-htvf4_e4437f72-2da5-4c3a-8a69-4a26f3190a62/kube-rbac-proxy/0.log" Jan 20 19:18:43 crc kubenswrapper[4661]: I0120 19:18:43.089651 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/controller/0.log" Jan 20 19:18:43 crc kubenswrapper[4661]: I0120 19:18:43.382904 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:43 crc kubenswrapper[4661]: I0120 19:18:43.382948 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:44 crc kubenswrapper[4661]: I0120 19:18:44.446514 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tj7mg" podUID="97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" containerName="registry-server" probeResult="failure" output=< Jan 20 19:18:44 crc kubenswrapper[4661]: timeout: failed to connect service ":50051" within 1s Jan 20 19:18:44 crc kubenswrapper[4661]: > Jan 20 19:18:45 crc kubenswrapper[4661]: I0120 19:18:45.612961 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/frr/0.log" Jan 20 19:18:45 crc kubenswrapper[4661]: I0120 19:18:45.624732 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/reloader/0.log" Jan 20 19:18:45 crc kubenswrapper[4661]: I0120 19:18:45.629578 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/frr-metrics/0.log" Jan 20 19:18:45 crc kubenswrapper[4661]: I0120 19:18:45.644729 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/kube-rbac-proxy/0.log" Jan 20 19:18:45 crc kubenswrapper[4661]: I0120 19:18:45.649998 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/kube-rbac-proxy-frr/0.log" Jan 20 19:18:45 crc kubenswrapper[4661]: I0120 19:18:45.658006 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-frr-files/0.log" Jan 20 19:18:45 crc kubenswrapper[4661]: I0120 19:18:45.670586 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-reloader/0.log" Jan 20 19:18:45 crc kubenswrapper[4661]: I0120 19:18:45.680061 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-metrics/0.log" Jan 20 19:18:45 crc kubenswrapper[4661]: I0120 19:18:45.700724 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7z86n_c47c14be-ea84-47ba-a52b-9cb718ae6a30/frr-k8s-webhook-server/0.log" Jan 20 
19:18:45 crc kubenswrapper[4661]: I0120 19:18:45.734019 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6f4477bbcd-h46rv_715feebe-b380-4ce1-9842-7f9da051a195/manager/0.log" Jan 20 19:18:45 crc kubenswrapper[4661]: I0120 19:18:45.750631 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-686c759fbc-hdkt8_e0bbd467-090b-431e-b89b-8159d61d7dab/webhook-server/0.log" Jan 20 19:18:46 crc kubenswrapper[4661]: I0120 19:18:46.214722 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-58jbs_69809466-8e46-4f8a-b90e-638f8af8b313/speaker/0.log" Jan 20 19:18:46 crc kubenswrapper[4661]: I0120 19:18:46.222847 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-58jbs_69809466-8e46-4f8a-b90e-638f8af8b313/kube-rbac-proxy/0.log" Jan 20 19:18:51 crc kubenswrapper[4661]: I0120 19:18:51.012400 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ee2394e6-ec1c-4093-9c8d-6a5795f4d146/memcached/0.log" Jan 20 19:18:51 crc kubenswrapper[4661]: I0120 19:18:51.142576 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:18:51 crc kubenswrapper[4661]: E0120 19:18:51.142866 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:18:51 crc kubenswrapper[4661]: I0120 19:18:51.150461 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-57dd7457c5-2txjn_978fc50f-3ea8-4427-af11-d8f4c4f3c0d5/neutron-api/0.log" Jan 20 19:18:51 crc kubenswrapper[4661]: I0120 19:18:51.214723 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-57dd7457c5-2txjn_978fc50f-3ea8-4427-af11-d8f4c4f3c0d5/neutron-httpd/0.log" Jan 20 19:18:51 crc kubenswrapper[4661]: I0120 19:18:51.244468 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj_58c705e0-9353-44b0-b3af-65c84ddb1f44/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:51 crc kubenswrapper[4661]: I0120 19:18:51.451586 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c36f847e-e718-445f-927e-4c6145c5ac8d/nova-api-log/0.log" Jan 20 19:18:51 crc kubenswrapper[4661]: I0120 19:18:51.824767 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c36f847e-e718-445f-927e-4c6145c5ac8d/nova-api-api/0.log" Jan 20 19:18:51 crc kubenswrapper[4661]: I0120 19:18:51.933060 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b5e5805d-a947-403a-b1dc-77949080c7be/nova-cell0-conductor-conductor/0.log" Jan 20 19:18:52 crc kubenswrapper[4661]: I0120 19:18:52.002410 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ebdbbfb8-e8c3-405b-914d-0ace13b50e32/nova-cell1-conductor-conductor/0.log" Jan 20 19:18:52 crc kubenswrapper[4661]: I0120 19:18:52.088417 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_22e1bf04-4a38-4fa3-85c3-b63e90226ffa/nova-cell1-novncproxy-novncproxy/0.log" Jan 20 19:18:52 crc kubenswrapper[4661]: I0120 19:18:52.152142 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb_40502583-1982-469d-a228-04488a4eb068/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:52 crc kubenswrapper[4661]: I0120 19:18:52.232942 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_253d6878-90af-44c0-b6d2-dfb0d79a2190/nova-metadata-log/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.377542 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_253d6878-90af-44c0-b6d2-dfb0d79a2190/nova-metadata-metadata/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.524330 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_47ac760b-6ca8-4048-a60e-e3717fcb25ec/nova-scheduler-scheduler/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.548150 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1a3386fb-6ffa-47fa-8697-8d3c45ff61be/galera/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.559940 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1a3386fb-6ffa-47fa-8697-8d3c45ff61be/mysql-bootstrap/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.621032 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1c705fc7-9ad0-4254-ad57-63db21057251/galera/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.649155 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1c705fc7-9ad0-4254-ad57-63db21057251/mysql-bootstrap/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.659792 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c6b78b7c-8709-4a28-bc8f-1cf8960203cc/openstackclient/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.669328 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.682253 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8mm55_97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8/openstack-network-exporter/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.730388 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9k84x_6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d/ovsdb-server/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.730915 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.744947 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9k84x_6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d/ovs-vswitchd/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.758169 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9k84x_6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d/ovsdb-server-init/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.774512 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-p7h4x_65017fb7-6ab3-43d0-a308-a3d8da39b811/ovn-controller/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.819446 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-kgwml_7f25c2cb-31da-4f0d-b7cb-472e09443f4a/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.828518 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_26291859-ffb9-435a-92bd-7ebc53f7e4bc/ovn-northd/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.833208 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_26291859-ffb9-435a-92bd-7ebc53f7e4bc/openstack-network-exporter/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.853866 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8a95987a-80ef-495d-adf7-f60c952836ce/ovsdbserver-nb/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.859156 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8a95987a-80ef-495d-adf7-f60c952836ce/openstack-network-exporter/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.875903 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_20636e35-51a8-4c79-888a-64d59e109a53/ovsdbserver-sb/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.881344 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_20636e35-51a8-4c79-888a-64d59e109a53/openstack-network-exporter/0.log" Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.915158 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tj7mg"] Jan 20 19:18:53 crc kubenswrapper[4661]: I0120 19:18:53.952468 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-66f64dd556-cvpcx_371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61/placement-log/0.log" Jan 20 19:18:54 crc kubenswrapper[4661]: I0120 19:18:54.005406 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-66f64dd556-cvpcx_371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61/placement-api/0.log" Jan 20 19:18:54 crc kubenswrapper[4661]: I0120 19:18:54.029073 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7301e169-326c-4397-89f7-28b94553cef4/rabbitmq/0.log" Jan 20 19:18:54 crc kubenswrapper[4661]: I0120 19:18:54.037706 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7301e169-326c-4397-89f7-28b94553cef4/setup-container/0.log" Jan 20 19:18:54 crc kubenswrapper[4661]: I0120 19:18:54.071394 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5a690866-3b40-4a9f-ba41-a5a3a6d76c95/rabbitmq/0.log" Jan 20 19:18:54 crc kubenswrapper[4661]: I0120 19:18:54.075577 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5a690866-3b40-4a9f-ba41-a5a3a6d76c95/setup-container/0.log" Jan 20 19:18:54 crc kubenswrapper[4661]: I0120 19:18:54.092061 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-zl299_9898267e-7857-4ef5-8f1a-10d5f1a97cec/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:54 crc kubenswrapper[4661]: I0120 19:18:54.103809 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9_39ff301d-9d0a-441f-879b-64ddb885ad9b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:54 crc kubenswrapper[4661]: I0120 19:18:54.117566 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-kzmcr_fd038034-bcb2-4723-a94f-16af58612f58/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:54 crc kubenswrapper[4661]: I0120 19:18:54.152497 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gr4vd_e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0/ssh-known-hosts-edpm-deployment/0.log" Jan 20 19:18:54 crc kubenswrapper[4661]: I0120 19:18:54.172175 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_fcc30bf2-7b68-4438-b7db-b041e1d1e2ff/tempest-tests-tempest-tests-runner/0.log" Jan 20 19:18:54 crc kubenswrapper[4661]: I0120 19:18:54.178349 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_db09fccc-50d1-4990-b482-a782822de50d/test-operator-logs-container/0.log" Jan 20 19:18:54 crc kubenswrapper[4661]: I0120 19:18:54.194113 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-wxksd_4190b947-a737-4a67-bfa9-dad8bb4a7499/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:18:55 crc kubenswrapper[4661]: I0120 19:18:55.159964 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tj7mg" podUID="97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" containerName="registry-server" containerID="cri-o://30e71eb7a8387fb6ba59e5a9438ce4c03a9b6fecb2d0263c88bb44cfe845b592" gracePeriod=2 Jan 20 19:18:55 crc kubenswrapper[4661]: I0120 19:18:55.782886 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:55 crc kubenswrapper[4661]: I0120 19:18:55.925544 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-utilities\") pod \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\" (UID: \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\") " Jan 20 19:18:55 crc kubenswrapper[4661]: I0120 19:18:55.926125 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-catalog-content\") pod \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\" (UID: \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\") " Jan 20 19:18:55 crc kubenswrapper[4661]: I0120 19:18:55.926359 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mlfh\" (UniqueName: \"kubernetes.io/projected/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-kube-api-access-4mlfh\") pod \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\" (UID: \"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244\") " Jan 20 19:18:55 crc kubenswrapper[4661]: I0120 19:18:55.926820 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-utilities" (OuterVolumeSpecName: "utilities") pod "97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" (UID: "97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:18:55 crc kubenswrapper[4661]: I0120 19:18:55.928024 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:18:55 crc kubenswrapper[4661]: I0120 19:18:55.934841 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-kube-api-access-4mlfh" (OuterVolumeSpecName: "kube-api-access-4mlfh") pod "97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" (UID: "97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244"). InnerVolumeSpecName "kube-api-access-4mlfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:18:55 crc kubenswrapper[4661]: I0120 19:18:55.984361 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" (UID: "97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.030081 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mlfh\" (UniqueName: \"kubernetes.io/projected/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-kube-api-access-4mlfh\") on node \"crc\" DevicePath \"\"" Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.030116 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.168119 4661 generic.go:334] "Generic (PLEG): container finished" podID="97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" containerID="30e71eb7a8387fb6ba59e5a9438ce4c03a9b6fecb2d0263c88bb44cfe845b592" exitCode=0 Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.168990 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tj7mg" event={"ID":"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244","Type":"ContainerDied","Data":"30e71eb7a8387fb6ba59e5a9438ce4c03a9b6fecb2d0263c88bb44cfe845b592"} Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.169088 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tj7mg" event={"ID":"97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244","Type":"ContainerDied","Data":"3126aeb72e25c155764e66a782d542ccf97d519504b5d39b6ef51762101b7301"} Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.169183 4661 scope.go:117] "RemoveContainer" containerID="30e71eb7a8387fb6ba59e5a9438ce4c03a9b6fecb2d0263c88bb44cfe845b592" Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.169413 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tj7mg" Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.193452 4661 scope.go:117] "RemoveContainer" containerID="505e35ef3bcba456b64042308e69f380d706f3849acc0a2dac73c1a6f31967d7" Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.202274 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tj7mg"] Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.211476 4661 scope.go:117] "RemoveContainer" containerID="522e35767aef9e940b2987e6ece7936abb196cf41f3e887c2fdf741bf5e6f565" Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.212109 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tj7mg"] Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.234153 4661 scope.go:117] "RemoveContainer" containerID="30e71eb7a8387fb6ba59e5a9438ce4c03a9b6fecb2d0263c88bb44cfe845b592" Jan 20 19:18:56 crc kubenswrapper[4661]: E0120 19:18:56.238046 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e71eb7a8387fb6ba59e5a9438ce4c03a9b6fecb2d0263c88bb44cfe845b592\": container with ID starting with 30e71eb7a8387fb6ba59e5a9438ce4c03a9b6fecb2d0263c88bb44cfe845b592 not found: ID does not exist" containerID="30e71eb7a8387fb6ba59e5a9438ce4c03a9b6fecb2d0263c88bb44cfe845b592" Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.238079 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e71eb7a8387fb6ba59e5a9438ce4c03a9b6fecb2d0263c88bb44cfe845b592"} err="failed to get container status \"30e71eb7a8387fb6ba59e5a9438ce4c03a9b6fecb2d0263c88bb44cfe845b592\": rpc error: code = NotFound desc = could not find container \"30e71eb7a8387fb6ba59e5a9438ce4c03a9b6fecb2d0263c88bb44cfe845b592\": container with ID starting with 30e71eb7a8387fb6ba59e5a9438ce4c03a9b6fecb2d0263c88bb44cfe845b592 not found: ID does not exist" Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.238101 4661 scope.go:117] "RemoveContainer" containerID="505e35ef3bcba456b64042308e69f380d706f3849acc0a2dac73c1a6f31967d7" Jan 20 19:18:56 crc kubenswrapper[4661]: E0120 19:18:56.238469 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505e35ef3bcba456b64042308e69f380d706f3849acc0a2dac73c1a6f31967d7\": container with ID starting with 505e35ef3bcba456b64042308e69f380d706f3849acc0a2dac73c1a6f31967d7 not found: ID does not exist" containerID="505e35ef3bcba456b64042308e69f380d706f3849acc0a2dac73c1a6f31967d7" Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.238581 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505e35ef3bcba456b64042308e69f380d706f3849acc0a2dac73c1a6f31967d7"} err="failed to get container status \"505e35ef3bcba456b64042308e69f380d706f3849acc0a2dac73c1a6f31967d7\": rpc error: code = NotFound desc = could not find container \"505e35ef3bcba456b64042308e69f380d706f3849acc0a2dac73c1a6f31967d7\": container with ID starting with 505e35ef3bcba456b64042308e69f380d706f3849acc0a2dac73c1a6f31967d7 not found: ID does not exist" Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.238683 4661 scope.go:117] "RemoveContainer" containerID="522e35767aef9e940b2987e6ece7936abb196cf41f3e887c2fdf741bf5e6f565" Jan 20 19:18:56 crc kubenswrapper[4661]: E0120 19:18:56.239103 4661 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"522e35767aef9e940b2987e6ece7936abb196cf41f3e887c2fdf741bf5e6f565\": container with ID starting with 522e35767aef9e940b2987e6ece7936abb196cf41f3e887c2fdf741bf5e6f565 not found: ID does not exist" containerID="522e35767aef9e940b2987e6ece7936abb196cf41f3e887c2fdf741bf5e6f565" Jan 20 19:18:56 crc kubenswrapper[4661]: I0120 19:18:56.239196 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522e35767aef9e940b2987e6ece7936abb196cf41f3e887c2fdf741bf5e6f565"} err="failed to get container status \"522e35767aef9e940b2987e6ece7936abb196cf41f3e887c2fdf741bf5e6f565\": rpc error: code = NotFound desc = could not find container \"522e35767aef9e940b2987e6ece7936abb196cf41f3e887c2fdf741bf5e6f565\": container with ID starting with 522e35767aef9e940b2987e6ece7936abb196cf41f3e887c2fdf741bf5e6f565 not found: ID does not exist" Jan 20 19:18:58 crc kubenswrapper[4661]: I0120 19:18:58.154509 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" path="/var/lib/kubelet/pods/97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244/volumes" Jan 20 19:19:05 crc kubenswrapper[4661]: I0120 19:19:05.144541 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:19:05 crc kubenswrapper[4661]: E0120 19:19:05.145634 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:19:09 crc kubenswrapper[4661]: I0120 19:19:09.314792 4661 generic.go:334] "Generic (PLEG): container finished" podID="f5be9df1-a80b-4503-8bb4-e1b1d314d3f5" containerID="163965f4c4bbe998e5d80d06a2ce4e9bafe18c567946a5ef5ba56fd3c4c2fc3e" exitCode=0 Jan 20 19:19:09 crc kubenswrapper[4661]: I0120 19:19:09.315176 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" event={"ID":"f5be9df1-a80b-4503-8bb4-e1b1d314d3f5","Type":"ContainerDied","Data":"163965f4c4bbe998e5d80d06a2ce4e9bafe18c567946a5ef5ba56fd3c4c2fc3e"} Jan 20 19:19:10 crc kubenswrapper[4661]: I0120 19:19:10.418621 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" Jan 20 19:19:10 crc kubenswrapper[4661]: I0120 19:19:10.458755 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7bgfs/crc-debug-vx6vh"] Jan 20 19:19:10 crc kubenswrapper[4661]: I0120 19:19:10.467247 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7bgfs/crc-debug-vx6vh"] Jan 20 19:19:10 crc kubenswrapper[4661]: I0120 19:19:10.557120 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrnhc\" (UniqueName: \"kubernetes.io/projected/f5be9df1-a80b-4503-8bb4-e1b1d314d3f5-kube-api-access-lrnhc\") pod \"f5be9df1-a80b-4503-8bb4-e1b1d314d3f5\" (UID: \"f5be9df1-a80b-4503-8bb4-e1b1d314d3f5\") " Jan 20 19:19:10 crc kubenswrapper[4661]: I0120 19:19:10.557397 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5be9df1-a80b-4503-8bb4-e1b1d314d3f5-host\") pod \"f5be9df1-a80b-4503-8bb4-e1b1d314d3f5\" (UID: \"f5be9df1-a80b-4503-8bb4-e1b1d314d3f5\") " Jan 20 19:19:10 crc kubenswrapper[4661]: I0120 19:19:10.557453 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5be9df1-a80b-4503-8bb4-e1b1d314d3f5-host" (OuterVolumeSpecName: "host") pod "f5be9df1-a80b-4503-8bb4-e1b1d314d3f5" (UID: "f5be9df1-a80b-4503-8bb4-e1b1d314d3f5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 19:19:10 crc kubenswrapper[4661]: I0120 19:19:10.558182 4661 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5be9df1-a80b-4503-8bb4-e1b1d314d3f5-host\") on node \"crc\" DevicePath \"\"" Jan 20 19:19:10 crc kubenswrapper[4661]: I0120 19:19:10.581064 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5be9df1-a80b-4503-8bb4-e1b1d314d3f5-kube-api-access-lrnhc" (OuterVolumeSpecName: "kube-api-access-lrnhc") pod "f5be9df1-a80b-4503-8bb4-e1b1d314d3f5" (UID: "f5be9df1-a80b-4503-8bb4-e1b1d314d3f5"). InnerVolumeSpecName "kube-api-access-lrnhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:19:10 crc kubenswrapper[4661]: I0120 19:19:10.660765 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrnhc\" (UniqueName: \"kubernetes.io/projected/f5be9df1-a80b-4503-8bb4-e1b1d314d3f5-kube-api-access-lrnhc\") on node \"crc\" DevicePath \"\"" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.334842 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0db8c3d13211bd7e420e3750435c1a4e67cdc0fbde0a840b1ef5d8c14c3cbc81" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.334933 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7bgfs/crc-debug-vx6vh" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.697009 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7bgfs/crc-debug-4x8mr"] Jan 20 19:19:11 crc kubenswrapper[4661]: E0120 19:19:11.697770 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" containerName="registry-server" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.697790 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" containerName="registry-server" Jan 20 19:19:11 crc kubenswrapper[4661]: E0120 19:19:11.697809 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5be9df1-a80b-4503-8bb4-e1b1d314d3f5" containerName="container-00" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.697816 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5be9df1-a80b-4503-8bb4-e1b1d314d3f5" containerName="container-00" Jan 20 19:19:11 crc kubenswrapper[4661]: E0120 19:19:11.697849 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" containerName="extract-utilities" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.697858 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" containerName="extract-utilities" Jan 20 19:19:11 crc kubenswrapper[4661]: E0120 19:19:11.697868 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" containerName="extract-content" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.697876 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" containerName="extract-content" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.698051 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5be9df1-a80b-4503-8bb4-e1b1d314d3f5" containerName="container-00" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.698075 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a3fd4b-8c90-4fdf-bf19-c7c5cf95a244" containerName="registry-server" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.698724 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7bgfs/crc-debug-4x8mr" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.886953 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr4pp\" (UniqueName: \"kubernetes.io/projected/10188f24-f92d-44ff-9f61-e2c9edd8ec76-kube-api-access-kr4pp\") pod \"crc-debug-4x8mr\" (UID: \"10188f24-f92d-44ff-9f61-e2c9edd8ec76\") " pod="openshift-must-gather-7bgfs/crc-debug-4x8mr" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.887037 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/10188f24-f92d-44ff-9f61-e2c9edd8ec76-host\") pod \"crc-debug-4x8mr\" (UID: \"10188f24-f92d-44ff-9f61-e2c9edd8ec76\") " pod="openshift-must-gather-7bgfs/crc-debug-4x8mr" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.990170 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/10188f24-f92d-44ff-9f61-e2c9edd8ec76-host\") pod \"crc-debug-4x8mr\" (UID: \"10188f24-f92d-44ff-9f61-e2c9edd8ec76\") " pod="openshift-must-gather-7bgfs/crc-debug-4x8mr" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.990279 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/10188f24-f92d-44ff-9f61-e2c9edd8ec76-host\") pod \"crc-debug-4x8mr\" (UID: \"10188f24-f92d-44ff-9f61-e2c9edd8ec76\") " pod="openshift-must-gather-7bgfs/crc-debug-4x8mr" Jan 20 19:19:11 crc kubenswrapper[4661]: I0120 19:19:11.991338 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr4pp\" (UniqueName: \"kubernetes.io/projected/10188f24-f92d-44ff-9f61-e2c9edd8ec76-kube-api-access-kr4pp\") pod \"crc-debug-4x8mr\" (UID: \"10188f24-f92d-44ff-9f61-e2c9edd8ec76\") " pod="openshift-must-gather-7bgfs/crc-debug-4x8mr" Jan 20 19:19:12 crc kubenswrapper[4661]: I0120 19:19:12.020275 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr4pp\" (UniqueName: \"kubernetes.io/projected/10188f24-f92d-44ff-9f61-e2c9edd8ec76-kube-api-access-kr4pp\") pod \"crc-debug-4x8mr\" (UID: \"10188f24-f92d-44ff-9f61-e2c9edd8ec76\") " pod="openshift-must-gather-7bgfs/crc-debug-4x8mr" Jan 20 19:19:12 crc kubenswrapper[4661]: I0120 19:19:12.151764 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5be9df1-a80b-4503-8bb4-e1b1d314d3f5" path="/var/lib/kubelet/pods/f5be9df1-a80b-4503-8bb4-e1b1d314d3f5/volumes" Jan 20 19:19:12 crc kubenswrapper[4661]: I0120 19:19:12.314403 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7bgfs/crc-debug-4x8mr" Jan 20 19:19:12 crc kubenswrapper[4661]: W0120 19:19:12.375757 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10188f24_f92d_44ff_9f61_e2c9edd8ec76.slice/crio-b1426f3b8cad2f4171a55ab20c747bd5998226962fa9a674d37b8671274551e4 WatchSource:0}: Error finding container b1426f3b8cad2f4171a55ab20c747bd5998226962fa9a674d37b8671274551e4: Status 404 returned error can't find the container with id b1426f3b8cad2f4171a55ab20c747bd5998226962fa9a674d37b8671274551e4 Jan 20 19:19:13 crc kubenswrapper[4661]: I0120 19:19:13.357600 4661 generic.go:334] "Generic (PLEG): container finished" podID="10188f24-f92d-44ff-9f61-e2c9edd8ec76" containerID="77fcc1525710554ff9d931e51ebf26318892a8e0eb5935d7538c8896854f957b" exitCode=0 Jan 20 19:19:13 crc kubenswrapper[4661]: I0120 19:19:13.357700 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7bgfs/crc-debug-4x8mr" event={"ID":"10188f24-f92d-44ff-9f61-e2c9edd8ec76","Type":"ContainerDied","Data":"77fcc1525710554ff9d931e51ebf26318892a8e0eb5935d7538c8896854f957b"} Jan 20 19:19:13 crc kubenswrapper[4661]: I0120 19:19:13.358025 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7bgfs/crc-debug-4x8mr" event={"ID":"10188f24-f92d-44ff-9f61-e2c9edd8ec76","Type":"ContainerStarted","Data":"b1426f3b8cad2f4171a55ab20c747bd5998226962fa9a674d37b8671274551e4"} Jan 20 19:19:13 crc kubenswrapper[4661]: I0120 19:19:13.809485 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7bgfs/crc-debug-4x8mr"] Jan 20 19:19:13 crc kubenswrapper[4661]: I0120 19:19:13.816544 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7bgfs/crc-debug-4x8mr"] Jan 20 19:19:14 crc kubenswrapper[4661]: I0120 19:19:14.454195 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7bgfs/crc-debug-4x8mr" Jan 20 19:19:14 crc kubenswrapper[4661]: I0120 19:19:14.645192 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/10188f24-f92d-44ff-9f61-e2c9edd8ec76-host\") pod \"10188f24-f92d-44ff-9f61-e2c9edd8ec76\" (UID: \"10188f24-f92d-44ff-9f61-e2c9edd8ec76\") " Jan 20 19:19:14 crc kubenswrapper[4661]: I0120 19:19:14.645327 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10188f24-f92d-44ff-9f61-e2c9edd8ec76-host" (OuterVolumeSpecName: "host") pod "10188f24-f92d-44ff-9f61-e2c9edd8ec76" (UID: "10188f24-f92d-44ff-9f61-e2c9edd8ec76"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 19:19:14 crc kubenswrapper[4661]: I0120 19:19:14.645432 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr4pp\" (UniqueName: \"kubernetes.io/projected/10188f24-f92d-44ff-9f61-e2c9edd8ec76-kube-api-access-kr4pp\") pod \"10188f24-f92d-44ff-9f61-e2c9edd8ec76\" (UID: \"10188f24-f92d-44ff-9f61-e2c9edd8ec76\") " Jan 20 19:19:14 crc kubenswrapper[4661]: I0120 19:19:14.646129 4661 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/10188f24-f92d-44ff-9f61-e2c9edd8ec76-host\") on node \"crc\" DevicePath \"\"" Jan 20 19:19:14 crc kubenswrapper[4661]: I0120 19:19:14.651341 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10188f24-f92d-44ff-9f61-e2c9edd8ec76-kube-api-access-kr4pp" (OuterVolumeSpecName: "kube-api-access-kr4pp") pod "10188f24-f92d-44ff-9f61-e2c9edd8ec76" (UID: "10188f24-f92d-44ff-9f61-e2c9edd8ec76"). InnerVolumeSpecName "kube-api-access-kr4pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:19:14 crc kubenswrapper[4661]: I0120 19:19:14.747731 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr4pp\" (UniqueName: \"kubernetes.io/projected/10188f24-f92d-44ff-9f61-e2c9edd8ec76-kube-api-access-kr4pp\") on node \"crc\" DevicePath \"\"" Jan 20 19:19:14 crc kubenswrapper[4661]: I0120 19:19:14.975972 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7bgfs/crc-debug-94nnt"] Jan 20 19:19:14 crc kubenswrapper[4661]: E0120 19:19:14.976423 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10188f24-f92d-44ff-9f61-e2c9edd8ec76" containerName="container-00" Jan 20 19:19:14 crc kubenswrapper[4661]: I0120 19:19:14.976443 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="10188f24-f92d-44ff-9f61-e2c9edd8ec76" containerName="container-00" Jan 20 19:19:14 crc kubenswrapper[4661]: I0120 19:19:14.976701 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="10188f24-f92d-44ff-9f61-e2c9edd8ec76" containerName="container-00" Jan 20 19:19:14 crc kubenswrapper[4661]: I0120 19:19:14.977443 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7bgfs/crc-debug-94nnt" Jan 20 19:19:15 crc kubenswrapper[4661]: I0120 19:19:15.052568 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px5sd\" (UniqueName: \"kubernetes.io/projected/47080d1d-6564-4ae9-b2ca-ece49c67c85d-kube-api-access-px5sd\") pod \"crc-debug-94nnt\" (UID: \"47080d1d-6564-4ae9-b2ca-ece49c67c85d\") " pod="openshift-must-gather-7bgfs/crc-debug-94nnt" Jan 20 19:19:15 crc kubenswrapper[4661]: I0120 19:19:15.052876 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47080d1d-6564-4ae9-b2ca-ece49c67c85d-host\") pod \"crc-debug-94nnt\" (UID: \"47080d1d-6564-4ae9-b2ca-ece49c67c85d\") " pod="openshift-must-gather-7bgfs/crc-debug-94nnt" Jan 20 19:19:15 crc kubenswrapper[4661]: I0120 19:19:15.154911 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px5sd\" (UniqueName: \"kubernetes.io/projected/47080d1d-6564-4ae9-b2ca-ece49c67c85d-kube-api-access-px5sd\") pod \"crc-debug-94nnt\" (UID: \"47080d1d-6564-4ae9-b2ca-ece49c67c85d\") " pod="openshift-must-gather-7bgfs/crc-debug-94nnt" Jan 20 19:19:15 crc kubenswrapper[4661]: I0120 19:19:15.154985 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47080d1d-6564-4ae9-b2ca-ece49c67c85d-host\") pod \"crc-debug-94nnt\" (UID: \"47080d1d-6564-4ae9-b2ca-ece49c67c85d\") " pod="openshift-must-gather-7bgfs/crc-debug-94nnt" Jan 20 19:19:15 crc kubenswrapper[4661]: I0120 19:19:15.155169 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47080d1d-6564-4ae9-b2ca-ece49c67c85d-host\") pod \"crc-debug-94nnt\" (UID: \"47080d1d-6564-4ae9-b2ca-ece49c67c85d\") " pod="openshift-must-gather-7bgfs/crc-debug-94nnt" Jan 20 19:19:15 crc kubenswrapper[4661]: I0120 19:19:15.175265 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px5sd\" (UniqueName: \"kubernetes.io/projected/47080d1d-6564-4ae9-b2ca-ece49c67c85d-kube-api-access-px5sd\") pod \"crc-debug-94nnt\" (UID: \"47080d1d-6564-4ae9-b2ca-ece49c67c85d\") " pod="openshift-must-gather-7bgfs/crc-debug-94nnt" Jan 20 19:19:15 crc kubenswrapper[4661]: I0120 19:19:15.297446 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7bgfs/crc-debug-94nnt" Jan 20 19:19:15 crc kubenswrapper[4661]: W0120 19:19:15.330898 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47080d1d_6564_4ae9_b2ca_ece49c67c85d.slice/crio-bbbb6a4d3061defdc63e424ac3df09cc7937823ee0deb7b9827ccf9cda8a69b4 WatchSource:0}: Error finding container bbbb6a4d3061defdc63e424ac3df09cc7937823ee0deb7b9827ccf9cda8a69b4: Status 404 returned error can't find the container with id bbbb6a4d3061defdc63e424ac3df09cc7937823ee0deb7b9827ccf9cda8a69b4 Jan 20 19:19:15 crc kubenswrapper[4661]: I0120 19:19:15.380114 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7bgfs/crc-debug-94nnt" event={"ID":"47080d1d-6564-4ae9-b2ca-ece49c67c85d","Type":"ContainerStarted","Data":"bbbb6a4d3061defdc63e424ac3df09cc7937823ee0deb7b9827ccf9cda8a69b4"} Jan 20 19:19:15 crc kubenswrapper[4661]: I0120 19:19:15.382293 4661 scope.go:117] "RemoveContainer" containerID="77fcc1525710554ff9d931e51ebf26318892a8e0eb5935d7538c8896854f957b" Jan 20 19:19:15 crc kubenswrapper[4661]: I0120 19:19:15.382437 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7bgfs/crc-debug-4x8mr" Jan 20 19:19:16 crc kubenswrapper[4661]: I0120 19:19:16.162869 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10188f24-f92d-44ff-9f61-e2c9edd8ec76" path="/var/lib/kubelet/pods/10188f24-f92d-44ff-9f61-e2c9edd8ec76/volumes" Jan 20 19:19:16 crc kubenswrapper[4661]: I0120 19:19:16.396409 4661 generic.go:334] "Generic (PLEG): container finished" podID="47080d1d-6564-4ae9-b2ca-ece49c67c85d" containerID="307a8db3b6ac0006ed1c4fea25c223ac11d8ebe51d0340b5ec8ffca3e40853c9" exitCode=0 Jan 20 19:19:16 crc kubenswrapper[4661]: I0120 19:19:16.396480 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7bgfs/crc-debug-94nnt" event={"ID":"47080d1d-6564-4ae9-b2ca-ece49c67c85d","Type":"ContainerDied","Data":"307a8db3b6ac0006ed1c4fea25c223ac11d8ebe51d0340b5ec8ffca3e40853c9"} Jan 20 19:19:16 crc kubenswrapper[4661]: I0120 19:19:16.444957 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7bgfs/crc-debug-94nnt"] Jan 20 19:19:16 crc kubenswrapper[4661]: I0120 19:19:16.456316 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7bgfs/crc-debug-94nnt"] Jan 20 19:19:17 crc kubenswrapper[4661]: I0120 19:19:17.534344 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7bgfs/crc-debug-94nnt" Jan 20 19:19:17 crc kubenswrapper[4661]: I0120 19:19:17.710171 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px5sd\" (UniqueName: \"kubernetes.io/projected/47080d1d-6564-4ae9-b2ca-ece49c67c85d-kube-api-access-px5sd\") pod \"47080d1d-6564-4ae9-b2ca-ece49c67c85d\" (UID: \"47080d1d-6564-4ae9-b2ca-ece49c67c85d\") " Jan 20 19:19:17 crc kubenswrapper[4661]: I0120 19:19:17.710614 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47080d1d-6564-4ae9-b2ca-ece49c67c85d-host\") pod \"47080d1d-6564-4ae9-b2ca-ece49c67c85d\" (UID: \"47080d1d-6564-4ae9-b2ca-ece49c67c85d\") " Jan 20 19:19:17 crc kubenswrapper[4661]: I0120 19:19:17.711557 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47080d1d-6564-4ae9-b2ca-ece49c67c85d-host" (OuterVolumeSpecName: "host") pod "47080d1d-6564-4ae9-b2ca-ece49c67c85d" (UID: "47080d1d-6564-4ae9-b2ca-ece49c67c85d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 19:19:17 crc kubenswrapper[4661]: I0120 19:19:17.727404 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47080d1d-6564-4ae9-b2ca-ece49c67c85d-kube-api-access-px5sd" (OuterVolumeSpecName: "kube-api-access-px5sd") pod "47080d1d-6564-4ae9-b2ca-ece49c67c85d" (UID: "47080d1d-6564-4ae9-b2ca-ece49c67c85d"). InnerVolumeSpecName "kube-api-access-px5sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:19:17 crc kubenswrapper[4661]: I0120 19:19:17.813549 4661 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47080d1d-6564-4ae9-b2ca-ece49c67c85d-host\") on node \"crc\" DevicePath \"\"" Jan 20 19:19:17 crc kubenswrapper[4661]: I0120 19:19:17.813587 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px5sd\" (UniqueName: \"kubernetes.io/projected/47080d1d-6564-4ae9-b2ca-ece49c67c85d-kube-api-access-px5sd\") on node \"crc\" DevicePath \"\"" Jan 20 19:19:18 crc kubenswrapper[4661]: I0120 19:19:18.153060 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47080d1d-6564-4ae9-b2ca-ece49c67c85d" path="/var/lib/kubelet/pods/47080d1d-6564-4ae9-b2ca-ece49c67c85d/volumes" Jan 20 19:19:18 crc kubenswrapper[4661]: I0120 19:19:18.417865 4661 scope.go:117] "RemoveContainer" containerID="307a8db3b6ac0006ed1c4fea25c223ac11d8ebe51d0340b5ec8ffca3e40853c9" Jan 20 19:19:18 crc kubenswrapper[4661]: I0120 19:19:18.418064 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7bgfs/crc-debug-94nnt" Jan 20 19:19:19 crc kubenswrapper[4661]: I0120 19:19:19.143189 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:19:19 crc kubenswrapper[4661]: E0120 19:19:19.143799 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:19:24 crc kubenswrapper[4661]: I0120 19:19:24.503480 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-bbwzg_e257e7b3-ba70-44d2-abb9-6a6848bf1c06/manager/0.log" Jan 20 19:19:24 crc kubenswrapper[4661]: I0120 19:19:24.517177 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/extract/0.log" Jan 20 19:19:24 crc kubenswrapper[4661]: I0120 19:19:24.528435 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/util/0.log" Jan 20 19:19:24 crc kubenswrapper[4661]: I0120 19:19:24.540150 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/pull/0.log" Jan 20 19:19:24 crc kubenswrapper[4661]: I0120 19:19:24.600866 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-hk9zx_51bdae14-22a5-4783-8712-fc51ca6d8a07/manager/0.log" Jan 20 19:19:24 crc kubenswrapper[4661]: I0120 19:19:24.613643 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dw6hd_08e08814-f213-4476-a78d-82cddc30022d/manager/0.log" Jan 20 19:19:24 crc kubenswrapper[4661]: I0120 19:19:24.695868 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gzjg9_eccd3436-cb57-49b8-a2f7-106fe5e39c7d/manager/0.log" Jan 20 19:19:24 crc kubenswrapper[4661]: I0120 19:19:24.705174 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-r5bws_2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec/manager/0.log" Jan 20 19:19:24 crc kubenswrapper[4661]: I0120 19:19:24.737319 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5w4m2_04a8f9c5-45fc-47db-adf2-3de38af2cf96/manager/0.log" Jan 20 19:19:25 crc kubenswrapper[4661]: I0120 19:19:25.014937 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-w8bbb_70002b35-6f0d-4679-9279-a80574c467f0/manager/0.log" Jan 20 19:19:25 crc kubenswrapper[4661]: I0120 19:19:25.025578 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-cszrc_10ed69a9-7fbf-4139-b2b2-80dec4f8cf41/manager/0.log" Jan 20 19:19:25 
crc kubenswrapper[4661]: I0120 19:19:25.105315 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-4g9db_a5920876-3cd0-41cf-b7d8-6fd8ea0af29c/manager/0.log" Jan 20 19:19:25 crc kubenswrapper[4661]: I0120 19:19:25.159566 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-svt25_6c1159da-faf7-4389-b57b-05173827968d/manager/0.log" Jan 20 19:19:25 crc kubenswrapper[4661]: I0120 19:19:25.192578 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-69ktn_12b130a9-df33-4c1a-a145-961791dc9d9d/manager/0.log" Jan 20 19:19:25 crc kubenswrapper[4661]: I0120 19:19:25.233627 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-cqf8m_1b070a22-e050-4db7-bc74-f8a1129a8d61/manager/0.log" Jan 20 19:19:25 crc kubenswrapper[4661]: I0120 19:19:25.307014 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-2g55t_52bfaf4d-624e-45d7-86d8-4c0e18afe2e6/manager/0.log" Jan 20 19:19:25 crc kubenswrapper[4661]: I0120 19:19:25.315849 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-mqz45_5798b368-6725-4e14-a77c-37b7bcfd538d/manager/0.log" Jan 20 19:19:25 crc kubenswrapper[4661]: I0120 19:19:25.334736 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc_65995719-9618-424e-a324-084d52a0cd47/manager/0.log" Jan 20 19:19:25 crc kubenswrapper[4661]: I0120 19:19:25.438264 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-fdc84db4c-p87rq_8e170a45-9133-4aee-81e7-7f6188f48c91/operator/0.log" Jan 20 19:19:26 crc kubenswrapper[4661]: I0120 19:19:26.642343 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58b4997fc9-9wjks_d603e76e-8a9d-444f-b251-2d29b5588c8e/manager/0.log" Jan 20 19:19:26 crc kubenswrapper[4661]: I0120 19:19:26.652945 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wj8kr_a9b5891c-9b50-4f14-ade6-69a048487d08/registry-server/0.log" Jan 20 19:19:26 crc kubenswrapper[4661]: I0120 19:19:26.708012 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-4msz7_f61aad5b-f531-4dc0-8328-4b057c84651e/manager/0.log" Jan 20 19:19:26 crc kubenswrapper[4661]: I0120 19:19:26.730825 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-tcgdv_dbbf0040-fc50-457e-ad76-42d6061a6df1/manager/0.log" Jan 20 19:19:26 crc kubenswrapper[4661]: I0120 19:19:26.750934 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-v8gf9_2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec/operator/0.log" Jan 20 19:19:26 crc kubenswrapper[4661]: I0120 19:19:26.764439 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-wqr58_497cc518-3499-43be-8aff-c4ff58803cba/manager/0.log" Jan 20 19:19:26 crc kubenswrapper[4661]: I0120 
19:19:26.830851 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-2wsx8_22fe1eac-c7f9-4cef-8811-db5861b4caa2/manager/0.log" Jan 20 19:19:26 crc kubenswrapper[4661]: I0120 19:19:26.840925 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-gg985_7f267072-d784-469d-acad-238e58ddd82c/manager/0.log" Jan 20 19:19:26 crc kubenswrapper[4661]: I0120 19:19:26.849439 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-hppzk_5a07b584-21cc-464b-a3bf-046c6e0ab18f/manager/0.log" Jan 20 19:19:31 crc kubenswrapper[4661]: I0120 19:19:31.142823 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:19:31 crc kubenswrapper[4661]: E0120 19:19:31.144080 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:19:32 crc kubenswrapper[4661]: I0120 19:19:32.232156 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qdlnn_a507ebcc-7e0b-445b-9688-882358d365ce/control-plane-machine-set-operator/0.log" Jan 20 19:19:32 crc kubenswrapper[4661]: I0120 19:19:32.249013 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hbkg_302e8226-565c-44a4-bb0e-dee670200ae3/kube-rbac-proxy/0.log" Jan 20 19:19:32 crc kubenswrapper[4661]: I0120 19:19:32.259048 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hbkg_302e8226-565c-44a4-bb0e-dee670200ae3/machine-api-operator/0.log" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.594471 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-77b42"] Jan 20 19:19:42 crc kubenswrapper[4661]: E0120 19:19:42.596352 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47080d1d-6564-4ae9-b2ca-ece49c67c85d" containerName="container-00" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.596923 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="47080d1d-6564-4ae9-b2ca-ece49c67c85d" containerName="container-00" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.597179 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="47080d1d-6564-4ae9-b2ca-ece49c67c85d" containerName="container-00" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.598553 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.636637 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-77b42"] Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.723900 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-catalog-content\") pod \"certified-operators-77b42\" (UID: \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\") " pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.724003 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-utilities\") pod \"certified-operators-77b42\" (UID: \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\") " pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.724325 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zddmv\" (UniqueName: \"kubernetes.io/projected/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-kube-api-access-zddmv\") pod \"certified-operators-77b42\" (UID: \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\") " pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.827112 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zddmv\" (UniqueName: \"kubernetes.io/projected/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-kube-api-access-zddmv\") pod \"certified-operators-77b42\" (UID: \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\") " pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.827340 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-catalog-content\") pod \"certified-operators-77b42\" (UID: \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\") " pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.827413 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-utilities\") pod \"certified-operators-77b42\" (UID: \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\") " pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.828872 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-catalog-content\") pod \"certified-operators-77b42\" (UID: \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\") " pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.828893 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-utilities\") pod \"certified-operators-77b42\" (UID: \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\") " pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.866096 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zddmv\" (UniqueName: \"kubernetes.io/projected/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-kube-api-access-zddmv\") pod \"certified-operators-77b42\" (UID: \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\") " pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:42 crc kubenswrapper[4661]: I0120 19:19:42.935986 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:43 crc kubenswrapper[4661]: I0120 19:19:43.144220 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:19:43 crc kubenswrapper[4661]: E0120 19:19:43.145033 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:19:43 crc kubenswrapper[4661]: I0120 19:19:43.456604 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-77b42"] Jan 20 19:19:44 crc kubenswrapper[4661]: I0120 19:19:44.655269 4661 generic.go:334] "Generic (PLEG): container finished" podID="a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" containerID="c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b" exitCode=0 Jan 20 19:19:44 crc kubenswrapper[4661]: I0120 19:19:44.655312 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77b42" event={"ID":"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1","Type":"ContainerDied","Data":"c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b"} Jan 20 19:19:44 crc kubenswrapper[4661]: I0120 19:19:44.655742 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77b42" event={"ID":"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1","Type":"ContainerStarted","Data":"d6b9f031f043e9f4673084b19f4f665a6ea38570b989766384240295f877e73a"} Jan 20 19:19:44 crc kubenswrapper[4661]: I0120 19:19:44.657522 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 19:19:46 crc kubenswrapper[4661]: I0120 19:19:46.678752 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77b42" event={"ID":"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1","Type":"ContainerStarted","Data":"b356591ad3cde7567113ed8ce795bef8d986d49741cb2195adbdd7287a8735d1"} Jan 20 19:19:47 crc kubenswrapper[4661]: I0120 19:19:47.688756 4661 generic.go:334] "Generic (PLEG): container finished" podID="a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" containerID="b356591ad3cde7567113ed8ce795bef8d986d49741cb2195adbdd7287a8735d1" exitCode=0 Jan 20 19:19:47 crc kubenswrapper[4661]: I0120 19:19:47.688869 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77b42" event={"ID":"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1","Type":"ContainerDied","Data":"b356591ad3cde7567113ed8ce795bef8d986d49741cb2195adbdd7287a8735d1"} Jan 20 19:19:48 crc kubenswrapper[4661]: I0120 19:19:48.702894 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77b42" 
event={"ID":"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1","Type":"ContainerStarted","Data":"ab17da6365990b43c2ee31789b041c01a0983b321bf781d87fbbc1b33c0f7698"} Jan 20 19:19:48 crc kubenswrapper[4661]: I0120 19:19:48.720790 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-77b42" podStartSLOduration=3.282604323 podStartE2EDuration="6.720773553s" podCreationTimestamp="2026-01-20 19:19:42 +0000 UTC" firstStartedPulling="2026-01-20 19:19:44.657318511 +0000 UTC m=+4440.988108173" lastFinishedPulling="2026-01-20 19:19:48.095487751 +0000 UTC m=+4444.426277403" observedRunningTime="2026-01-20 19:19:48.719753936 +0000 UTC m=+4445.050543598" watchObservedRunningTime="2026-01-20 19:19:48.720773553 +0000 UTC m=+4445.051563215" Jan 20 19:19:52 crc kubenswrapper[4661]: E0120 19:19:52.627172 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8990fca_0e09_46d9_b3db_f5b1e3e17bb1.slice/crio-c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b.scope\": RecentStats: unable to find data in memory cache]" Jan 20 19:19:52 crc kubenswrapper[4661]: I0120 19:19:52.936767 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:52 crc kubenswrapper[4661]: I0120 19:19:52.937170 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:52 crc kubenswrapper[4661]: I0120 19:19:52.992877 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:53 crc kubenswrapper[4661]: I0120 19:19:53.814279 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:53 crc kubenswrapper[4661]: I0120 19:19:53.874163 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-77b42"] Jan 20 19:19:55 crc kubenswrapper[4661]: I0120 19:19:55.143583 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:19:55 crc kubenswrapper[4661]: E0120 19:19:55.144144 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:19:55 crc kubenswrapper[4661]: I0120 19:19:55.775203 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-77b42" podUID="a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" containerName="registry-server" containerID="cri-o://ab17da6365990b43c2ee31789b041c01a0983b321bf781d87fbbc1b33c0f7698" gracePeriod=2 Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.287587 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.350868 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zddmv\" (UniqueName: \"kubernetes.io/projected/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-kube-api-access-zddmv\") pod \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\" (UID: \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\") " Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.351259 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-catalog-content\") pod \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\" (UID: \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\") " Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.351287 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-utilities\") pod \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\" (UID: \"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1\") " Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.352295 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-utilities" (OuterVolumeSpecName: "utilities") pod "a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" (UID: "a8990fca-0e09-46d9-b3db-f5b1e3e17bb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.359630 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-kube-api-access-zddmv" (OuterVolumeSpecName: "kube-api-access-zddmv") pod "a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" (UID: "a8990fca-0e09-46d9-b3db-f5b1e3e17bb1"). InnerVolumeSpecName "kube-api-access-zddmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.411080 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" (UID: "a8990fca-0e09-46d9-b3db-f5b1e3e17bb1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.453407 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zddmv\" (UniqueName: \"kubernetes.io/projected/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-kube-api-access-zddmv\") on node \"crc\" DevicePath \"\"" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.453450 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.453460 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.800525 4661 generic.go:334] "Generic (PLEG): container finished" podID="a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" containerID="ab17da6365990b43c2ee31789b041c01a0983b321bf781d87fbbc1b33c0f7698" exitCode=0 Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.800571 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77b42" event={"ID":"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1","Type":"ContainerDied","Data":"ab17da6365990b43c2ee31789b041c01a0983b321bf781d87fbbc1b33c0f7698"} Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.800606 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77b42" event={"ID":"a8990fca-0e09-46d9-b3db-f5b1e3e17bb1","Type":"ContainerDied","Data":"d6b9f031f043e9f4673084b19f4f665a6ea38570b989766384240295f877e73a"} Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.800624 4661 scope.go:117] "RemoveContainer" containerID="ab17da6365990b43c2ee31789b041c01a0983b321bf781d87fbbc1b33c0f7698" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.800816 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77b42" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.819827 4661 scope.go:117] "RemoveContainer" containerID="b356591ad3cde7567113ed8ce795bef8d986d49741cb2195adbdd7287a8735d1" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.837935 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-77b42"] Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.847380 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-77b42"] Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.858001 4661 scope.go:117] "RemoveContainer" containerID="c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.886497 4661 scope.go:117] "RemoveContainer" containerID="ab17da6365990b43c2ee31789b041c01a0983b321bf781d87fbbc1b33c0f7698" Jan 20 19:19:56 crc kubenswrapper[4661]: E0120 19:19:56.887031 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab17da6365990b43c2ee31789b041c01a0983b321bf781d87fbbc1b33c0f7698\": container with ID starting with ab17da6365990b43c2ee31789b041c01a0983b321bf781d87fbbc1b33c0f7698 not found: ID does not exist" containerID="ab17da6365990b43c2ee31789b041c01a0983b321bf781d87fbbc1b33c0f7698" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.887154 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab17da6365990b43c2ee31789b041c01a0983b321bf781d87fbbc1b33c0f7698"} err="failed to get container status \"ab17da6365990b43c2ee31789b041c01a0983b321bf781d87fbbc1b33c0f7698\": rpc error: code = NotFound desc = could not find container \"ab17da6365990b43c2ee31789b041c01a0983b321bf781d87fbbc1b33c0f7698\": container with ID starting with ab17da6365990b43c2ee31789b041c01a0983b321bf781d87fbbc1b33c0f7698 not found: ID does not exist" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.887254 4661 scope.go:117] "RemoveContainer" containerID="b356591ad3cde7567113ed8ce795bef8d986d49741cb2195adbdd7287a8735d1" Jan 20 19:19:56 crc kubenswrapper[4661]: E0120 19:19:56.887928 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b356591ad3cde7567113ed8ce795bef8d986d49741cb2195adbdd7287a8735d1\": container with ID starting with b356591ad3cde7567113ed8ce795bef8d986d49741cb2195adbdd7287a8735d1 not found: ID does not exist" containerID="b356591ad3cde7567113ed8ce795bef8d986d49741cb2195adbdd7287a8735d1" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.887969 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b356591ad3cde7567113ed8ce795bef8d986d49741cb2195adbdd7287a8735d1"} err="failed to get container status \"b356591ad3cde7567113ed8ce795bef8d986d49741cb2195adbdd7287a8735d1\": rpc error: code = NotFound desc = could not find container \"b356591ad3cde7567113ed8ce795bef8d986d49741cb2195adbdd7287a8735d1\": container with ID starting with b356591ad3cde7567113ed8ce795bef8d986d49741cb2195adbdd7287a8735d1 not found: ID does not exist" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.887996 4661 scope.go:117] "RemoveContainer" containerID="c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b" Jan 20 19:19:56 crc kubenswrapper[4661]: E0120 19:19:56.888449 4661 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b\": container with ID starting with c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b not found: ID does not exist" containerID="c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b" Jan 20 19:19:56 crc kubenswrapper[4661]: I0120 19:19:56.888496 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b"} err="failed to get container status \"c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b\": rpc error: code = NotFound desc = could not find container \"c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b\": container with ID starting with c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b not found: ID does not exist" Jan 20 19:19:58 crc kubenswrapper[4661]: I0120 19:19:58.158185 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" path="/var/lib/kubelet/pods/a8990fca-0e09-46d9-b3db-f5b1e3e17bb1/volumes" Jan 20 19:20:02 crc kubenswrapper[4661]: E0120 19:20:02.898466 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8990fca_0e09_46d9_b3db_f5b1e3e17bb1.slice/crio-c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b.scope\": RecentStats: unable to find data in memory cache]" Jan 20 19:20:03 crc kubenswrapper[4661]: I0120 19:20:03.841211 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2wqcl_13a9f3bc-c133-49ea-9cfd-bc8c107e32c6/cert-manager-controller/0.log" Jan 20 19:20:03 crc kubenswrapper[4661]: I0120 19:20:03.857729 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-f7qg8_3c6e82bb-badf-4079-abf0-566f4b6f0776/cert-manager-cainjector/0.log" Jan 20 19:20:03 crc kubenswrapper[4661]: I0120 19:20:03.871748 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-scrrz_b1feddfe-5c29-4eba-99c5-65849498f0dc/cert-manager-webhook/0.log" Jan 20 19:20:09 crc kubenswrapper[4661]: I0120 19:20:09.142540 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:20:09 crc kubenswrapper[4661]: E0120 19:20:09.143247 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:20:10 crc kubenswrapper[4661]: I0120 19:20:10.546191 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-frgmz_68fe7ab0-cff5-474c-aa0d-7c579ddc51bb/nmstate-console-plugin/0.log" Jan 20 19:20:10 crc kubenswrapper[4661]: I0120 19:20:10.575310 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q9p2x_0b121ec2-f30a-46c4-a556-dd00cca2a1e3/nmstate-handler/0.log" Jan 20 19:20:10 crc kubenswrapper[4661]: I0120 19:20:10.593925 4661 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7pxb7_7f2c01ac-294a-42b8-9988-22419d94a0ec/nmstate-metrics/0.log" Jan 20 19:20:10 crc kubenswrapper[4661]: I0120 19:20:10.602689 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7pxb7_7f2c01ac-294a-42b8-9988-22419d94a0ec/kube-rbac-proxy/0.log" Jan 20 19:20:10 crc kubenswrapper[4661]: I0120 19:20:10.618867 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-mkng6_91e3ce75-26ba-42cb-b4dd-322bc9188bab/nmstate-operator/0.log" Jan 20 19:20:10 crc kubenswrapper[4661]: I0120 19:20:10.628880 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-6rl82_fa442cfc-fd6e-4b5d-882d-aaa8de83f99a/nmstate-webhook/0.log" Jan 20 19:20:13 crc kubenswrapper[4661]: E0120 19:20:13.121061 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8990fca_0e09_46d9_b3db_f5b1e3e17bb1.slice/crio-c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b.scope\": RecentStats: unable to find data in memory cache]" Jan 20 19:20:22 crc kubenswrapper[4661]: I0120 19:20:22.142724 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:20:22 crc kubenswrapper[4661]: E0120 19:20:22.144538 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:20:23 crc kubenswrapper[4661]: E0120 19:20:23.376325 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8990fca_0e09_46d9_b3db_f5b1e3e17bb1.slice/crio-c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b.scope\": RecentStats: unable to find data in memory cache]" Jan 20 19:20:25 crc kubenswrapper[4661]: I0120 19:20:24.999910 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-htvf4_e4437f72-2da5-4c3a-8a69-4a26f3190a62/controller/0.log" Jan 20 19:20:25 crc kubenswrapper[4661]: I0120 19:20:25.006889 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-htvf4_e4437f72-2da5-4c3a-8a69-4a26f3190a62/kube-rbac-proxy/0.log" Jan 20 19:20:25 crc kubenswrapper[4661]: I0120 19:20:25.033449 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/controller/0.log" Jan 20 19:20:26 crc kubenswrapper[4661]: I0120 19:20:26.442965 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/frr/0.log" Jan 20 19:20:26 crc kubenswrapper[4661]: I0120 19:20:26.499208 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/reloader/0.log" Jan 20 19:20:26 crc kubenswrapper[4661]: I0120 19:20:26.509879 4661 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/frr-metrics/0.log" Jan 20 19:20:26 crc kubenswrapper[4661]: I0120 19:20:26.518271 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/kube-rbac-proxy/0.log" Jan 20 19:20:26 crc kubenswrapper[4661]: I0120 19:20:26.532903 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/kube-rbac-proxy-frr/0.log" Jan 20 19:20:26 crc kubenswrapper[4661]: I0120 19:20:26.551761 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-frr-files/0.log" Jan 20 19:20:26 crc kubenswrapper[4661]: I0120 19:20:26.566821 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-reloader/0.log" Jan 20 19:20:26 crc kubenswrapper[4661]: I0120 19:20:26.608926 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-metrics/0.log" Jan 20 19:20:26 crc kubenswrapper[4661]: I0120 19:20:26.637329 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7z86n_c47c14be-ea84-47ba-a52b-9cb718ae6a30/frr-k8s-webhook-server/0.log" Jan 20 19:20:26 crc kubenswrapper[4661]: I0120 19:20:26.679763 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6f4477bbcd-h46rv_715feebe-b380-4ce1-9842-7f9da051a195/manager/0.log" Jan 20 19:20:26 crc kubenswrapper[4661]: I0120 19:20:26.690877 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-686c759fbc-hdkt8_e0bbd467-090b-431e-b89b-8159d61d7dab/webhook-server/0.log" Jan 20 19:20:26 crc kubenswrapper[4661]: I0120 19:20:26.970188 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-58jbs_69809466-8e46-4f8a-b90e-638f8af8b313/speaker/0.log" Jan 20 19:20:26 crc kubenswrapper[4661]: I0120 19:20:26.978543 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-58jbs_69809466-8e46-4f8a-b90e-638f8af8b313/kube-rbac-proxy/0.log" Jan 20 19:20:31 crc kubenswrapper[4661]: I0120 19:20:31.889434 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn_32740d6c-8df0-4b8a-8097-5fbecd7ca5e5/extract/0.log" Jan 20 19:20:31 crc kubenswrapper[4661]: I0120 19:20:31.906201 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn_32740d6c-8df0-4b8a-8097-5fbecd7ca5e5/util/0.log" Jan 20 19:20:31 crc kubenswrapper[4661]: I0120 19:20:31.921761 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn_32740d6c-8df0-4b8a-8097-5fbecd7ca5e5/pull/0.log" Jan 20 19:20:31 crc kubenswrapper[4661]: I0120 19:20:31.931287 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn_b42326cd-209d-43fd-8195-113ca565dfee/extract/0.log" Jan 20 19:20:31 crc kubenswrapper[4661]: I0120 19:20:31.938392 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn_b42326cd-209d-43fd-8195-113ca565dfee/util/0.log" Jan 20 19:20:31 crc kubenswrapper[4661]: I0120 19:20:31.946819 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn_b42326cd-209d-43fd-8195-113ca565dfee/pull/0.log" Jan 20 19:20:32 crc kubenswrapper[4661]: I0120 19:20:32.578135 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4qf7z_c4a07359-5af4-415a-af87-0b579fb7d0dc/registry-server/0.log" Jan 20 19:20:32 crc kubenswrapper[4661]: I0120 19:20:32.583974 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4qf7z_c4a07359-5af4-415a-af87-0b579fb7d0dc/extract-utilities/0.log" Jan 20 19:20:32 crc kubenswrapper[4661]: I0120 19:20:32.599642 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4qf7z_c4a07359-5af4-415a-af87-0b579fb7d0dc/extract-content/0.log" Jan 20 19:20:33 crc kubenswrapper[4661]: I0120 19:20:33.144596 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:20:33 crc kubenswrapper[4661]: E0120 19:20:33.144912 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:20:33 crc kubenswrapper[4661]: I0120 19:20:33.202557 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snrjs_6c7424e7-2b2f-4f1b-8970-9061b4f651ff/registry-server/0.log" Jan 20 19:20:33 crc kubenswrapper[4661]: I0120 19:20:33.207689 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snrjs_6c7424e7-2b2f-4f1b-8970-9061b4f651ff/extract-utilities/0.log" Jan 20 19:20:33 crc kubenswrapper[4661]: I0120 19:20:33.215785 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snrjs_6c7424e7-2b2f-4f1b-8970-9061b4f651ff/extract-content/0.log" Jan 20 19:20:33 crc kubenswrapper[4661]: I0120 19:20:33.227326 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-p2zck_663dd2a4-8e69-41d7-b561-4419dd0b4e90/marketplace-operator/0.log" Jan 20 19:20:33 crc kubenswrapper[4661]: I0120 19:20:33.364769 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-chn68_631fc07f-b0f0-4f54-881f-bc76a8ec7b34/registry-server/0.log" Jan 20 19:20:33 crc kubenswrapper[4661]: I0120 19:20:33.369980 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-chn68_631fc07f-b0f0-4f54-881f-bc76a8ec7b34/extract-utilities/0.log" Jan 20 19:20:33 crc kubenswrapper[4661]: I0120 19:20:33.377186 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-chn68_631fc07f-b0f0-4f54-881f-bc76a8ec7b34/extract-content/0.log" Jan 20 19:20:33 crc kubenswrapper[4661]: E0120 19:20:33.635416 4661 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8990fca_0e09_46d9_b3db_f5b1e3e17bb1.slice/crio-c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b.scope\": RecentStats: unable to find data in memory cache]" Jan 20 19:20:33 crc kubenswrapper[4661]: I0120 19:20:33.925395 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cm29g_a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef/registry-server/0.log" Jan 20 19:20:33 crc kubenswrapper[4661]: I0120 19:20:33.931126 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cm29g_a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef/extract-utilities/0.log" Jan 20 19:20:33 crc kubenswrapper[4661]: I0120 19:20:33.938830 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cm29g_a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef/extract-content/0.log" Jan 20 19:20:43 crc kubenswrapper[4661]: E0120 19:20:43.906729 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8990fca_0e09_46d9_b3db_f5b1e3e17bb1.slice/crio-c8fca1c3b4897c54be557a8d1e2b655cb6ef5c6953ac3c3fb043ff4c19ca2a0b.scope\": RecentStats: unable to find data in memory cache]" Jan 20 19:20:44 crc kubenswrapper[4661]: I0120 19:20:44.153408 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:20:44 crc kubenswrapper[4661]: E0120 19:20:44.154019 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:20:55 crc kubenswrapper[4661]: I0120 19:20:55.142064 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:20:55 crc kubenswrapper[4661]: E0120 19:20:55.142726 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:21:08 crc kubenswrapper[4661]: I0120 19:21:08.143286 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:21:08 crc kubenswrapper[4661]: E0120 19:21:08.144147 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:21:21 crc kubenswrapper[4661]: I0120 19:21:21.142409 4661 scope.go:117] 
"RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:21:21 crc kubenswrapper[4661]: E0120 19:21:21.143276 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:21:32 crc kubenswrapper[4661]: I0120 19:21:32.143149 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:21:32 crc kubenswrapper[4661]: E0120 19:21:32.143719 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:21:44 crc kubenswrapper[4661]: I0120 19:21:44.149688 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:21:44 crc kubenswrapper[4661]: E0120 19:21:44.150377 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:21:59 crc kubenswrapper[4661]: I0120 19:21:59.142260 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:21:59 crc kubenswrapper[4661]: E0120 19:21:59.142964 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:22:08 crc kubenswrapper[4661]: I0120 19:22:08.031179 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-htvf4_e4437f72-2da5-4c3a-8a69-4a26f3190a62/controller/0.log" Jan 20 19:22:08 crc kubenswrapper[4661]: I0120 19:22:08.036979 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-htvf4_e4437f72-2da5-4c3a-8a69-4a26f3190a62/kube-rbac-proxy/0.log" Jan 20 19:22:08 crc kubenswrapper[4661]: I0120 19:22:08.071005 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/controller/0.log" Jan 20 19:22:08 crc kubenswrapper[4661]: I0120 19:22:08.314051 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2wqcl_13a9f3bc-c133-49ea-9cfd-bc8c107e32c6/cert-manager-controller/0.log" Jan 20 19:22:08 crc kubenswrapper[4661]: 
I0120 19:22:08.332633 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-f7qg8_3c6e82bb-badf-4079-abf0-566f4b6f0776/cert-manager-cainjector/0.log" Jan 20 19:22:08 crc kubenswrapper[4661]: I0120 19:22:08.344059 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-scrrz_b1feddfe-5c29-4eba-99c5-65849498f0dc/cert-manager-webhook/0.log" Jan 20 19:22:09 crc kubenswrapper[4661]: I0120 19:22:09.364518 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/frr/0.log" Jan 20 19:22:09 crc kubenswrapper[4661]: I0120 19:22:09.399148 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/reloader/0.log" Jan 20 19:22:09 crc kubenswrapper[4661]: I0120 19:22:09.418609 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/frr-metrics/0.log" Jan 20 19:22:09 crc kubenswrapper[4661]: I0120 19:22:09.434403 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/kube-rbac-proxy/0.log" Jan 20 19:22:09 crc kubenswrapper[4661]: I0120 19:22:09.443517 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/kube-rbac-proxy-frr/0.log" Jan 20 19:22:09 crc kubenswrapper[4661]: I0120 19:22:09.453390 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-frr-files/0.log" Jan 20 19:22:09 crc kubenswrapper[4661]: I0120 19:22:09.459941 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-reloader/0.log" Jan 20 19:22:09 crc kubenswrapper[4661]: I0120 19:22:09.471420 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-metrics/0.log" Jan 20 19:22:09 crc kubenswrapper[4661]: I0120 19:22:09.498616 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7z86n_c47c14be-ea84-47ba-a52b-9cb718ae6a30/frr-k8s-webhook-server/0.log" Jan 20 19:22:09 crc kubenswrapper[4661]: I0120 19:22:09.520941 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6f4477bbcd-h46rv_715feebe-b380-4ce1-9842-7f9da051a195/manager/0.log" Jan 20 19:22:09 crc kubenswrapper[4661]: I0120 19:22:09.537183 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-686c759fbc-hdkt8_e0bbd467-090b-431e-b89b-8159d61d7dab/webhook-server/0.log" Jan 20 19:22:09 crc kubenswrapper[4661]: I0120 19:22:09.768736 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-58jbs_69809466-8e46-4f8a-b90e-638f8af8b313/speaker/0.log" Jan 20 19:22:09 crc kubenswrapper[4661]: I0120 19:22:09.780248 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-58jbs_69809466-8e46-4f8a-b90e-638f8af8b313/kube-rbac-proxy/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.183894 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-bbwzg_e257e7b3-ba70-44d2-abb9-6a6848bf1c06/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.202705 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/extract/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.211048 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/util/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.219067 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/pull/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.263878 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-hk9zx_51bdae14-22a5-4783-8712-fc51ca6d8a07/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.284804 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dw6hd_08e08814-f213-4476-a78d-82cddc30022d/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.369848 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gzjg9_eccd3436-cb57-49b8-a2f7-106fe5e39c7d/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.384920 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-r5bws_2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.397275 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5w4m2_04a8f9c5-45fc-47db-adf2-3de38af2cf96/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.588678 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-w8bbb_70002b35-6f0d-4679-9279-a80574c467f0/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.615651 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-cszrc_10ed69a9-7fbf-4139-b2b2-80dec4f8cf41/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.683745 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-4g9db_a5920876-3cd0-41cf-b7d8-6fd8ea0af29c/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.784237 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-svt25_6c1159da-faf7-4389-b57b-05173827968d/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.815112 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-69ktn_12b130a9-df33-4c1a-a145-961791dc9d9d/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.859076 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-cqf8m_1b070a22-e050-4db7-bc74-f8a1129a8d61/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.923394 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-2g55t_52bfaf4d-624e-45d7-86d8-4c0e18afe2e6/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.934549 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-mqz45_5798b368-6725-4e14-a77c-37b7bcfd538d/manager/0.log" Jan 20 19:22:10 crc kubenswrapper[4661]: I0120 19:22:10.959305 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc_65995719-9618-424e-a324-084d52a0cd47/manager/0.log" Jan 20 19:22:11 crc kubenswrapper[4661]: I0120 19:22:11.118088 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-fdc84db4c-p87rq_8e170a45-9133-4aee-81e7-7f6188f48c91/operator/0.log" Jan 20 19:22:11 crc kubenswrapper[4661]: I0120 19:22:11.164599 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2wqcl_13a9f3bc-c133-49ea-9cfd-bc8c107e32c6/cert-manager-controller/0.log" Jan 20 19:22:11 crc kubenswrapper[4661]: I0120 19:22:11.181866 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-f7qg8_3c6e82bb-badf-4079-abf0-566f4b6f0776/cert-manager-cainjector/0.log" Jan 20 19:22:11 crc kubenswrapper[4661]: I0120 19:22:11.219079 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-scrrz_b1feddfe-5c29-4eba-99c5-65849498f0dc/cert-manager-webhook/0.log" Jan 20 19:22:12 crc kubenswrapper[4661]: I0120 19:22:12.201006 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58b4997fc9-9wjks_d603e76e-8a9d-444f-b251-2d29b5588c8e/manager/0.log" Jan 20 19:22:12 crc kubenswrapper[4661]: I0120 19:22:12.220045 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wj8kr_a9b5891c-9b50-4f14-ade6-69a048487d08/registry-server/0.log" Jan 20 19:22:12 crc kubenswrapper[4661]: I0120 19:22:12.277939 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-4msz7_f61aad5b-f531-4dc0-8328-4b057c84651e/manager/0.log" Jan 20 19:22:12 crc kubenswrapper[4661]: I0120 19:22:12.304977 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-tcgdv_dbbf0040-fc50-457e-ad76-42d6061a6df1/manager/0.log" Jan 20 19:22:12 crc kubenswrapper[4661]: I0120 19:22:12.327332 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-v8gf9_2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec/operator/0.log" Jan 20 19:22:12 crc kubenswrapper[4661]: I0120 19:22:12.340150 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-wqr58_497cc518-3499-43be-8aff-c4ff58803cba/manager/0.log" Jan 20 19:22:12 crc kubenswrapper[4661]: I0120 19:22:12.399010 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-2wsx8_22fe1eac-c7f9-4cef-8811-db5861b4caa2/manager/0.log" Jan 20 19:22:12 crc kubenswrapper[4661]: I0120 19:22:12.415930 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-gg985_7f267072-d784-469d-acad-238e58ddd82c/manager/0.log" Jan 20 19:22:12 crc kubenswrapper[4661]: I0120 19:22:12.437719 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-hppzk_5a07b584-21cc-464b-a3bf-046c6e0ab18f/manager/0.log" Jan 20 19:22:12 crc kubenswrapper[4661]: I0120 19:22:12.507728 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qdlnn_a507ebcc-7e0b-445b-9688-882358d365ce/control-plane-machine-set-operator/0.log" Jan 20 19:22:12 crc kubenswrapper[4661]: I0120 19:22:12.541359 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hbkg_302e8226-565c-44a4-bb0e-dee670200ae3/kube-rbac-proxy/0.log" Jan 20 19:22:12 crc kubenswrapper[4661]: I0120 19:22:12.550567 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hbkg_302e8226-565c-44a4-bb0e-dee670200ae3/machine-api-operator/0.log" Jan 20 19:22:13 crc kubenswrapper[4661]: I0120 19:22:13.142574 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:22:13 crc kubenswrapper[4661]: E0120 19:22:13.142842 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:22:13 crc kubenswrapper[4661]: I0120 19:22:13.581646 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-bbwzg_e257e7b3-ba70-44d2-abb9-6a6848bf1c06/manager/0.log" Jan 20 19:22:13 crc kubenswrapper[4661]: I0120 19:22:13.594991 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/extract/0.log" Jan 20 19:22:13 crc kubenswrapper[4661]: I0120 19:22:13.602540 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/util/0.log" Jan 20 19:22:13 crc kubenswrapper[4661]: I0120 19:22:13.609093 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/pull/0.log" Jan 20 19:22:13 crc kubenswrapper[4661]: I0120 19:22:13.657468 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-hk9zx_51bdae14-22a5-4783-8712-fc51ca6d8a07/manager/0.log" Jan 20 19:22:13 crc kubenswrapper[4661]: I0120 19:22:13.672611 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dw6hd_08e08814-f213-4476-a78d-82cddc30022d/manager/0.log" Jan 20 19:22:13 crc kubenswrapper[4661]: I0120 19:22:13.755436 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gzjg9_eccd3436-cb57-49b8-a2f7-106fe5e39c7d/manager/0.log" Jan 20 19:22:13 crc kubenswrapper[4661]: I0120 19:22:13.769241 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-r5bws_2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec/manager/0.log" Jan 20 19:22:13 crc kubenswrapper[4661]: I0120 19:22:13.785274 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5w4m2_04a8f9c5-45fc-47db-adf2-3de38af2cf96/manager/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.022331 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-w8bbb_70002b35-6f0d-4679-9279-a80574c467f0/manager/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.033922 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-cszrc_10ed69a9-7fbf-4139-b2b2-80dec4f8cf41/manager/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.106106 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-4g9db_a5920876-3cd0-41cf-b7d8-6fd8ea0af29c/manager/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.143753 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-svt25_6c1159da-faf7-4389-b57b-05173827968d/manager/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.175686 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-69ktn_12b130a9-df33-4c1a-a145-961791dc9d9d/manager/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.212309 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-cqf8m_1b070a22-e050-4db7-bc74-f8a1129a8d61/manager/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.253189 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-frgmz_68fe7ab0-cff5-474c-aa0d-7c579ddc51bb/nmstate-console-plugin/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.270692 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q9p2x_0b121ec2-f30a-46c4-a556-dd00cca2a1e3/nmstate-handler/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.276959 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-2g55t_52bfaf4d-624e-45d7-86d8-4c0e18afe2e6/manager/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.288923 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-mqz45_5798b368-6725-4e14-a77c-37b7bcfd538d/manager/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.303373 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7pxb7_7f2c01ac-294a-42b8-9988-22419d94a0ec/nmstate-metrics/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.311877 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc_65995719-9618-424e-a324-084d52a0cd47/manager/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.318284 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7pxb7_7f2c01ac-294a-42b8-9988-22419d94a0ec/kube-rbac-proxy/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.337342 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-mkng6_91e3ce75-26ba-42cb-b4dd-322bc9188bab/nmstate-operator/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.498264 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-fdc84db4c-p87rq_8e170a45-9133-4aee-81e7-7f6188f48c91/operator/0.log" Jan 20 19:22:14 crc kubenswrapper[4661]: I0120 19:22:14.659573 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-6rl82_fa442cfc-fd6e-4b5d-882d-aaa8de83f99a/nmstate-webhook/0.log" Jan 20 19:22:15 crc kubenswrapper[4661]: I0120 19:22:15.684029 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58b4997fc9-9wjks_d603e76e-8a9d-444f-b251-2d29b5588c8e/manager/0.log" Jan 20 19:22:15 crc kubenswrapper[4661]: I0120 19:22:15.694106 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wj8kr_a9b5891c-9b50-4f14-ade6-69a048487d08/registry-server/0.log" Jan 20 19:22:15 crc kubenswrapper[4661]: I0120 19:22:15.731773 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-4msz7_f61aad5b-f531-4dc0-8328-4b057c84651e/manager/0.log" Jan 20 19:22:15 crc kubenswrapper[4661]: I0120 19:22:15.752006 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-tcgdv_dbbf0040-fc50-457e-ad76-42d6061a6df1/manager/0.log" Jan 20 19:22:15 crc kubenswrapper[4661]: I0120 19:22:15.770160 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-v8gf9_2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec/operator/0.log" Jan 20 19:22:15 crc kubenswrapper[4661]: I0120 19:22:15.793876 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-wqr58_497cc518-3499-43be-8aff-c4ff58803cba/manager/0.log" Jan 20 19:22:15 crc kubenswrapper[4661]: I0120 19:22:15.881369 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-2wsx8_22fe1eac-c7f9-4cef-8811-db5861b4caa2/manager/0.log" Jan 20 19:22:15 crc kubenswrapper[4661]: I0120 19:22:15.924883 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-gg985_7f267072-d784-469d-acad-238e58ddd82c/manager/0.log" Jan 20 19:22:15 crc kubenswrapper[4661]: I0120 19:22:15.942657 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-hppzk_5a07b584-21cc-464b-a3bf-046c6e0ab18f/manager/0.log" Jan 20 19:22:18 crc kubenswrapper[4661]: I0120 19:22:18.006008 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/kube-multus-additional-cni-plugins/0.log" Jan 20 19:22:18 crc kubenswrapper[4661]: I0120 19:22:18.016485 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/egress-router-binary-copy/0.log" Jan 20 19:22:18 crc kubenswrapper[4661]: I0120 19:22:18.027140 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/cni-plugins/0.log" Jan 20 19:22:18 crc kubenswrapper[4661]: I0120 19:22:18.043306 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/bond-cni-plugin/0.log" Jan 20 19:22:18 crc kubenswrapper[4661]: I0120 19:22:18.062531 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/routeoverride-cni/0.log" Jan 20 19:22:18 crc kubenswrapper[4661]: I0120 19:22:18.081446 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/whereabouts-cni-bincopy/0.log" Jan 20 19:22:18 crc kubenswrapper[4661]: I0120 19:22:18.087405 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/whereabouts-cni/0.log" Jan 20 19:22:18 crc kubenswrapper[4661]: I0120 19:22:18.110868 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-44vhk_e4cd0e68-3282-4713-8386-8c86f56f1f70/multus-admission-controller/0.log" Jan 20 19:22:18 crc kubenswrapper[4661]: I0120 19:22:18.116633 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-44vhk_e4cd0e68-3282-4713-8386-8c86f56f1f70/kube-rbac-proxy/0.log" Jan 20 19:22:18 crc kubenswrapper[4661]: I0120 19:22:18.164528 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/2.log" Jan 20 19:22:18 crc kubenswrapper[4661]: I0120 19:22:18.256085 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/3.log" Jan 20 19:22:18 crc kubenswrapper[4661]: I0120 19:22:18.290437 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-dhd6h_58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131/network-metrics-daemon/0.log" Jan 20 19:22:18 crc kubenswrapper[4661]: I0120 19:22:18.295251 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-dhd6h_58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131/kube-rbac-proxy/0.log" Jan 20 19:22:26 crc kubenswrapper[4661]: I0120 19:22:26.142200 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:22:26 crc kubenswrapper[4661]: E0120 19:22:26.143033 4661 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:22:31 crc kubenswrapper[4661]: I0120 19:22:31.966840 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-krs8j"] Jan 20 19:22:31 crc kubenswrapper[4661]: E0120 19:22:31.967975 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" containerName="extract-utilities" Jan 20 19:22:31 crc kubenswrapper[4661]: I0120 19:22:31.967998 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" containerName="extract-utilities" Jan 20 19:22:31 crc kubenswrapper[4661]: E0120 19:22:31.968034 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" containerName="registry-server" Jan 20 19:22:31 crc kubenswrapper[4661]: I0120 19:22:31.968049 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" containerName="registry-server" Jan 20 19:22:31 crc kubenswrapper[4661]: E0120 19:22:31.968079 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" containerName="extract-content" Jan 20 19:22:31 crc kubenswrapper[4661]: I0120 19:22:31.968091 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" containerName="extract-content" Jan 20 19:22:31 crc kubenswrapper[4661]: I0120 19:22:31.968434 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8990fca-0e09-46d9-b3db-f5b1e3e17bb1" containerName="registry-server" Jan 20 19:22:31 crc kubenswrapper[4661]: I0120 19:22:31.970772 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:31 crc kubenswrapper[4661]: I0120 19:22:31.989227 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-krs8j"] Jan 20 19:22:32 crc kubenswrapper[4661]: I0120 19:22:32.050037 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvvt6\" (UniqueName: \"kubernetes.io/projected/cb260cbc-14da-4183-b5bf-f17c297932e4-kube-api-access-jvvt6\") pod \"redhat-marketplace-krs8j\" (UID: \"cb260cbc-14da-4183-b5bf-f17c297932e4\") " pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:32 crc kubenswrapper[4661]: I0120 19:22:32.050184 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb260cbc-14da-4183-b5bf-f17c297932e4-catalog-content\") pod \"redhat-marketplace-krs8j\" (UID: \"cb260cbc-14da-4183-b5bf-f17c297932e4\") " pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:32 crc kubenswrapper[4661]: I0120 19:22:32.050252 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb260cbc-14da-4183-b5bf-f17c297932e4-utilities\") pod \"redhat-marketplace-krs8j\" (UID: \"cb260cbc-14da-4183-b5bf-f17c297932e4\") " pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:32 crc kubenswrapper[4661]: I0120 19:22:32.153002 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb260cbc-14da-4183-b5bf-f17c297932e4-utilities\") pod \"redhat-marketplace-krs8j\" (UID: \"cb260cbc-14da-4183-b5bf-f17c297932e4\") " pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:32 crc kubenswrapper[4661]: I0120 19:22:32.153194 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvvt6\" (UniqueName: \"kubernetes.io/projected/cb260cbc-14da-4183-b5bf-f17c297932e4-kube-api-access-jvvt6\") pod \"redhat-marketplace-krs8j\" (UID: \"cb260cbc-14da-4183-b5bf-f17c297932e4\") " pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:32 crc kubenswrapper[4661]: I0120 19:22:32.153293 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb260cbc-14da-4183-b5bf-f17c297932e4-catalog-content\") pod \"redhat-marketplace-krs8j\" (UID: \"cb260cbc-14da-4183-b5bf-f17c297932e4\") " pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:32 crc kubenswrapper[4661]: I0120 19:22:32.153932 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb260cbc-14da-4183-b5bf-f17c297932e4-catalog-content\") pod \"redhat-marketplace-krs8j\" (UID: \"cb260cbc-14da-4183-b5bf-f17c297932e4\") " pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:32 crc kubenswrapper[4661]: I0120 19:22:32.153971 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb260cbc-14da-4183-b5bf-f17c297932e4-utilities\") pod \"redhat-marketplace-krs8j\" (UID: \"cb260cbc-14da-4183-b5bf-f17c297932e4\") " pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:32 crc kubenswrapper[4661]: I0120 19:22:32.177853 4661 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jvvt6\" (UniqueName: \"kubernetes.io/projected/cb260cbc-14da-4183-b5bf-f17c297932e4-kube-api-access-jvvt6\") pod \"redhat-marketplace-krs8j\" (UID: \"cb260cbc-14da-4183-b5bf-f17c297932e4\") " pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:32 crc kubenswrapper[4661]: I0120 19:22:32.289109 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:32 crc kubenswrapper[4661]: I0120 19:22:32.768988 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-krs8j"] Jan 20 19:22:33 crc kubenswrapper[4661]: I0120 19:22:33.058787 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krs8j" event={"ID":"cb260cbc-14da-4183-b5bf-f17c297932e4","Type":"ContainerStarted","Data":"eaf057feb943545b918a45663dec5beb92fd768528b1a128e693432fa9bb2708"} Jan 20 19:22:34 crc kubenswrapper[4661]: I0120 19:22:34.073853 4661 generic.go:334] "Generic (PLEG): container finished" podID="cb260cbc-14da-4183-b5bf-f17c297932e4" containerID="5f4eaab84a1574b925674ef4a0ed2dc4a44b7a7a76fb175605e545dc186fde16" exitCode=0 Jan 20 19:22:34 crc kubenswrapper[4661]: I0120 19:22:34.074152 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krs8j" event={"ID":"cb260cbc-14da-4183-b5bf-f17c297932e4","Type":"ContainerDied","Data":"5f4eaab84a1574b925674ef4a0ed2dc4a44b7a7a76fb175605e545dc186fde16"} Jan 20 19:22:36 crc kubenswrapper[4661]: I0120 19:22:36.093135 4661 generic.go:334] "Generic (PLEG): container finished" podID="cb260cbc-14da-4183-b5bf-f17c297932e4" containerID="f5b5005789e15749d32e4949af6de86bbcbcf426daeea47189ffc5fcee04d408" exitCode=0 Jan 20 19:22:36 crc kubenswrapper[4661]: I0120 19:22:36.093243 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krs8j" event={"ID":"cb260cbc-14da-4183-b5bf-f17c297932e4","Type":"ContainerDied","Data":"f5b5005789e15749d32e4949af6de86bbcbcf426daeea47189ffc5fcee04d408"} Jan 20 19:22:37 crc kubenswrapper[4661]: I0120 19:22:37.102795 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krs8j" event={"ID":"cb260cbc-14da-4183-b5bf-f17c297932e4","Type":"ContainerStarted","Data":"f78e0eb1932a7680de41c1b655e523ea9f8ebe158b881ea5dff8c2ac6a3c14c9"} Jan 20 19:22:37 crc kubenswrapper[4661]: I0120 19:22:37.142567 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-krs8j" podStartSLOduration=3.619833358 podStartE2EDuration="6.142552854s" podCreationTimestamp="2026-01-20 19:22:31 +0000 UTC" firstStartedPulling="2026-01-20 19:22:34.080192014 +0000 UTC m=+4610.410981686" lastFinishedPulling="2026-01-20 19:22:36.60291152 +0000 UTC m=+4612.933701182" observedRunningTime="2026-01-20 19:22:37.131465105 +0000 UTC m=+4613.462254777" watchObservedRunningTime="2026-01-20 19:22:37.142552854 +0000 UTC m=+4613.473342516" Jan 20 19:22:39 crc kubenswrapper[4661]: I0120 19:22:39.142985 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:22:39 crc kubenswrapper[4661]: E0120 19:22:39.143701 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:22:42 crc kubenswrapper[4661]: I0120 19:22:42.290310 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:42 crc kubenswrapper[4661]: I0120 19:22:42.291983 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:42 crc kubenswrapper[4661]: I0120 19:22:42.378209 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:43 crc kubenswrapper[4661]: I0120 19:22:43.250433 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:43 crc kubenswrapper[4661]: I0120 19:22:43.324596 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-krs8j"] Jan 20 19:22:45 crc kubenswrapper[4661]: I0120 19:22:45.181159 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-krs8j" podUID="cb260cbc-14da-4183-b5bf-f17c297932e4" containerName="registry-server" containerID="cri-o://f78e0eb1932a7680de41c1b655e523ea9f8ebe158b881ea5dff8c2ac6a3c14c9" gracePeriod=2 Jan 20 19:22:45 crc kubenswrapper[4661]: I0120 19:22:45.644381 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:45 crc kubenswrapper[4661]: I0120 19:22:45.763118 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvvt6\" (UniqueName: \"kubernetes.io/projected/cb260cbc-14da-4183-b5bf-f17c297932e4-kube-api-access-jvvt6\") pod \"cb260cbc-14da-4183-b5bf-f17c297932e4\" (UID: \"cb260cbc-14da-4183-b5bf-f17c297932e4\") " Jan 20 19:22:45 crc kubenswrapper[4661]: I0120 19:22:45.763261 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb260cbc-14da-4183-b5bf-f17c297932e4-catalog-content\") pod \"cb260cbc-14da-4183-b5bf-f17c297932e4\" (UID: \"cb260cbc-14da-4183-b5bf-f17c297932e4\") " Jan 20 19:22:45 crc kubenswrapper[4661]: I0120 19:22:45.763351 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb260cbc-14da-4183-b5bf-f17c297932e4-utilities\") pod \"cb260cbc-14da-4183-b5bf-f17c297932e4\" (UID: \"cb260cbc-14da-4183-b5bf-f17c297932e4\") " Jan 20 19:22:45 crc kubenswrapper[4661]: I0120 19:22:45.765302 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb260cbc-14da-4183-b5bf-f17c297932e4-utilities" (OuterVolumeSpecName: "utilities") pod "cb260cbc-14da-4183-b5bf-f17c297932e4" (UID: "cb260cbc-14da-4183-b5bf-f17c297932e4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:22:45 crc kubenswrapper[4661]: I0120 19:22:45.772307 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb260cbc-14da-4183-b5bf-f17c297932e4-kube-api-access-jvvt6" (OuterVolumeSpecName: "kube-api-access-jvvt6") pod "cb260cbc-14da-4183-b5bf-f17c297932e4" (UID: "cb260cbc-14da-4183-b5bf-f17c297932e4"). InnerVolumeSpecName "kube-api-access-jvvt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:22:45 crc kubenswrapper[4661]: I0120 19:22:45.800385 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb260cbc-14da-4183-b5bf-f17c297932e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb260cbc-14da-4183-b5bf-f17c297932e4" (UID: "cb260cbc-14da-4183-b5bf-f17c297932e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:22:45 crc kubenswrapper[4661]: I0120 19:22:45.866296 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvvt6\" (UniqueName: \"kubernetes.io/projected/cb260cbc-14da-4183-b5bf-f17c297932e4-kube-api-access-jvvt6\") on node \"crc\" DevicePath \"\"" Jan 20 19:22:45 crc kubenswrapper[4661]: I0120 19:22:45.866335 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb260cbc-14da-4183-b5bf-f17c297932e4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:22:45 crc kubenswrapper[4661]: I0120 19:22:45.866348 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb260cbc-14da-4183-b5bf-f17c297932e4-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.193739 4661 generic.go:334] "Generic (PLEG): container finished" podID="cb260cbc-14da-4183-b5bf-f17c297932e4" containerID="f78e0eb1932a7680de41c1b655e523ea9f8ebe158b881ea5dff8c2ac6a3c14c9" exitCode=0 Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.193785 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krs8j" event={"ID":"cb260cbc-14da-4183-b5bf-f17c297932e4","Type":"ContainerDied","Data":"f78e0eb1932a7680de41c1b655e523ea9f8ebe158b881ea5dff8c2ac6a3c14c9"} Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.193816 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krs8j" event={"ID":"cb260cbc-14da-4183-b5bf-f17c297932e4","Type":"ContainerDied","Data":"eaf057feb943545b918a45663dec5beb92fd768528b1a128e693432fa9bb2708"} Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.193836 4661 scope.go:117] "RemoveContainer" containerID="f78e0eb1932a7680de41c1b655e523ea9f8ebe158b881ea5dff8c2ac6a3c14c9" Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.193887 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krs8j" Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.235948 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-krs8j"] Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.238330 4661 scope.go:117] "RemoveContainer" containerID="f5b5005789e15749d32e4949af6de86bbcbcf426daeea47189ffc5fcee04d408" Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.260646 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-krs8j"] Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.265483 4661 scope.go:117] "RemoveContainer" containerID="5f4eaab84a1574b925674ef4a0ed2dc4a44b7a7a76fb175605e545dc186fde16" Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.295250 4661 scope.go:117] "RemoveContainer" containerID="f78e0eb1932a7680de41c1b655e523ea9f8ebe158b881ea5dff8c2ac6a3c14c9" Jan 20 19:22:46 crc kubenswrapper[4661]: E0120 19:22:46.295862 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78e0eb1932a7680de41c1b655e523ea9f8ebe158b881ea5dff8c2ac6a3c14c9\": container with ID starting with f78e0eb1932a7680de41c1b655e523ea9f8ebe158b881ea5dff8c2ac6a3c14c9 not found: ID does not exist" containerID="f78e0eb1932a7680de41c1b655e523ea9f8ebe158b881ea5dff8c2ac6a3c14c9" Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.295906 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78e0eb1932a7680de41c1b655e523ea9f8ebe158b881ea5dff8c2ac6a3c14c9"} err="failed to get container status \"f78e0eb1932a7680de41c1b655e523ea9f8ebe158b881ea5dff8c2ac6a3c14c9\": rpc error: code = NotFound desc = could not find container \"f78e0eb1932a7680de41c1b655e523ea9f8ebe158b881ea5dff8c2ac6a3c14c9\": container with ID starting with f78e0eb1932a7680de41c1b655e523ea9f8ebe158b881ea5dff8c2ac6a3c14c9 not found: ID does not exist" Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.295932 4661 scope.go:117] "RemoveContainer" containerID="f5b5005789e15749d32e4949af6de86bbcbcf426daeea47189ffc5fcee04d408" Jan 20 19:22:46 crc kubenswrapper[4661]: E0120 19:22:46.296284 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5b5005789e15749d32e4949af6de86bbcbcf426daeea47189ffc5fcee04d408\": container with ID starting with f5b5005789e15749d32e4949af6de86bbcbcf426daeea47189ffc5fcee04d408 not found: ID does not exist" containerID="f5b5005789e15749d32e4949af6de86bbcbcf426daeea47189ffc5fcee04d408" Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.296309 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5b5005789e15749d32e4949af6de86bbcbcf426daeea47189ffc5fcee04d408"} err="failed to get container status \"f5b5005789e15749d32e4949af6de86bbcbcf426daeea47189ffc5fcee04d408\": rpc error: code = NotFound desc = could not find container \"f5b5005789e15749d32e4949af6de86bbcbcf426daeea47189ffc5fcee04d408\": container with ID starting with f5b5005789e15749d32e4949af6de86bbcbcf426daeea47189ffc5fcee04d408 not found: ID does not exist" Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.296326 4661 scope.go:117] "RemoveContainer" containerID="5f4eaab84a1574b925674ef4a0ed2dc4a44b7a7a76fb175605e545dc186fde16" Jan 20 19:22:46 crc kubenswrapper[4661]: E0120 19:22:46.296843 4661 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5f4eaab84a1574b925674ef4a0ed2dc4a44b7a7a76fb175605e545dc186fde16\": container with ID starting with 5f4eaab84a1574b925674ef4a0ed2dc4a44b7a7a76fb175605e545dc186fde16 not found: ID does not exist" containerID="5f4eaab84a1574b925674ef4a0ed2dc4a44b7a7a76fb175605e545dc186fde16" Jan 20 19:22:46 crc kubenswrapper[4661]: I0120 19:22:46.296907 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f4eaab84a1574b925674ef4a0ed2dc4a44b7a7a76fb175605e545dc186fde16"} err="failed to get container status \"5f4eaab84a1574b925674ef4a0ed2dc4a44b7a7a76fb175605e545dc186fde16\": rpc error: code = NotFound desc = could not find container \"5f4eaab84a1574b925674ef4a0ed2dc4a44b7a7a76fb175605e545dc186fde16\": container with ID starting with 5f4eaab84a1574b925674ef4a0ed2dc4a44b7a7a76fb175605e545dc186fde16 not found: ID does not exist" Jan 20 19:22:48 crc kubenswrapper[4661]: I0120 19:22:48.166049 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb260cbc-14da-4183-b5bf-f17c297932e4" path="/var/lib/kubelet/pods/cb260cbc-14da-4183-b5bf-f17c297932e4/volumes" Jan 20 19:22:52 crc kubenswrapper[4661]: I0120 19:22:52.145733 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:22:52 crc kubenswrapper[4661]: E0120 19:22:52.146956 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:23:07 crc kubenswrapper[4661]: I0120 19:23:07.145043 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:23:07 crc kubenswrapper[4661]: I0120 19:23:07.475345 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"c8c6bcc8680b1eddd1950421f97606495736752d26633ff9fc203db7e17d57b4"} Jan 20 19:25:19 crc kubenswrapper[4661]: I0120 19:25:19.529356 4661 scope.go:117] "RemoveContainer" containerID="163965f4c4bbe998e5d80d06a2ce4e9bafe18c567946a5ef5ba56fd3c4c2fc3e" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.416894 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d4jtq"] Jan 20 19:25:27 crc kubenswrapper[4661]: E0120 19:25:27.417780 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb260cbc-14da-4183-b5bf-f17c297932e4" containerName="extract-utilities" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.417795 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb260cbc-14da-4183-b5bf-f17c297932e4" containerName="extract-utilities" Jan 20 19:25:27 crc kubenswrapper[4661]: E0120 19:25:27.417819 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb260cbc-14da-4183-b5bf-f17c297932e4" containerName="extract-content" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.417827 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb260cbc-14da-4183-b5bf-f17c297932e4" containerName="extract-content" Jan 20 19:25:27 crc 
kubenswrapper[4661]: E0120 19:25:27.417852 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb260cbc-14da-4183-b5bf-f17c297932e4" containerName="registry-server" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.417863 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb260cbc-14da-4183-b5bf-f17c297932e4" containerName="registry-server" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.418102 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb260cbc-14da-4183-b5bf-f17c297932e4" containerName="registry-server" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.423458 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.435870 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4jtq"] Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.590429 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423e9789-4154-4a6b-bfa5-fe36ca4d7577-catalog-content\") pod \"redhat-operators-d4jtq\" (UID: \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\") " pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.590871 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnnj8\" (UniqueName: \"kubernetes.io/projected/423e9789-4154-4a6b-bfa5-fe36ca4d7577-kube-api-access-pnnj8\") pod \"redhat-operators-d4jtq\" (UID: \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\") " pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.591071 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423e9789-4154-4a6b-bfa5-fe36ca4d7577-utilities\") pod \"redhat-operators-d4jtq\" (UID: \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\") " pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.691841 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423e9789-4154-4a6b-bfa5-fe36ca4d7577-utilities\") pod \"redhat-operators-d4jtq\" (UID: \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\") " pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.692217 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423e9789-4154-4a6b-bfa5-fe36ca4d7577-catalog-content\") pod \"redhat-operators-d4jtq\" (UID: \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\") " pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.692293 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423e9789-4154-4a6b-bfa5-fe36ca4d7577-utilities\") pod \"redhat-operators-d4jtq\" (UID: \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\") " pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.692622 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/423e9789-4154-4a6b-bfa5-fe36ca4d7577-catalog-content\") pod \"redhat-operators-d4jtq\" (UID: \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\") " pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.692852 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnnj8\" (UniqueName: \"kubernetes.io/projected/423e9789-4154-4a6b-bfa5-fe36ca4d7577-kube-api-access-pnnj8\") pod \"redhat-operators-d4jtq\" (UID: \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\") " pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.717500 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnnj8\" (UniqueName: \"kubernetes.io/projected/423e9789-4154-4a6b-bfa5-fe36ca4d7577-kube-api-access-pnnj8\") pod \"redhat-operators-d4jtq\" (UID: \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\") " pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:27 crc kubenswrapper[4661]: I0120 19:25:27.754550 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:28 crc kubenswrapper[4661]: I0120 19:25:28.299229 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4jtq"] Jan 20 19:25:28 crc kubenswrapper[4661]: I0120 19:25:28.974472 4661 generic.go:334] "Generic (PLEG): container finished" podID="423e9789-4154-4a6b-bfa5-fe36ca4d7577" containerID="6bb2d6c20dc1785f95f54340e0660cb5419a9c5de1db1e6691de87620d109dcf" exitCode=0 Jan 20 19:25:28 crc kubenswrapper[4661]: I0120 19:25:28.974542 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4jtq" event={"ID":"423e9789-4154-4a6b-bfa5-fe36ca4d7577","Type":"ContainerDied","Data":"6bb2d6c20dc1785f95f54340e0660cb5419a9c5de1db1e6691de87620d109dcf"} Jan 20 19:25:28 crc kubenswrapper[4661]: I0120 19:25:28.974823 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4jtq" event={"ID":"423e9789-4154-4a6b-bfa5-fe36ca4d7577","Type":"ContainerStarted","Data":"977b3afd049168c5660170e0d8d880797b190c6d6d1dfc19f55a94b3fda48d05"} Jan 20 19:25:28 crc kubenswrapper[4661]: I0120 19:25:28.976186 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 19:25:29 crc kubenswrapper[4661]: I0120 19:25:29.323433 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:25:29 crc kubenswrapper[4661]: I0120 19:25:29.323797 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:25:31 crc kubenswrapper[4661]: I0120 19:25:31.005564 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4jtq" event={"ID":"423e9789-4154-4a6b-bfa5-fe36ca4d7577","Type":"ContainerStarted","Data":"7cdeb0d45b211ac7952dfbb45b98164c071d697548fd0b4450d6d5619f47357a"} Jan 20 19:25:35 crc kubenswrapper[4661]: I0120 
19:25:35.046449 4661 generic.go:334] "Generic (PLEG): container finished" podID="423e9789-4154-4a6b-bfa5-fe36ca4d7577" containerID="7cdeb0d45b211ac7952dfbb45b98164c071d697548fd0b4450d6d5619f47357a" exitCode=0 Jan 20 19:25:35 crc kubenswrapper[4661]: I0120 19:25:35.046536 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4jtq" event={"ID":"423e9789-4154-4a6b-bfa5-fe36ca4d7577","Type":"ContainerDied","Data":"7cdeb0d45b211ac7952dfbb45b98164c071d697548fd0b4450d6d5619f47357a"} Jan 20 19:25:36 crc kubenswrapper[4661]: I0120 19:25:36.058048 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4jtq" event={"ID":"423e9789-4154-4a6b-bfa5-fe36ca4d7577","Type":"ContainerStarted","Data":"728a13802305d40e018d9ec96c4520e69b358fa3b70929b71aabbcb72566182a"} Jan 20 19:25:36 crc kubenswrapper[4661]: I0120 19:25:36.131987 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d4jtq" podStartSLOduration=2.664085675 podStartE2EDuration="9.131970071s" podCreationTimestamp="2026-01-20 19:25:27 +0000 UTC" firstStartedPulling="2026-01-20 19:25:28.975956311 +0000 UTC m=+4785.306745973" lastFinishedPulling="2026-01-20 19:25:35.443840697 +0000 UTC m=+4791.774630369" observedRunningTime="2026-01-20 19:25:36.125576314 +0000 UTC m=+4792.456365986" watchObservedRunningTime="2026-01-20 19:25:36.131970071 +0000 UTC m=+4792.462759723" Jan 20 19:25:37 crc kubenswrapper[4661]: I0120 19:25:37.755095 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:37 crc kubenswrapper[4661]: I0120 19:25:37.755159 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:38 crc kubenswrapper[4661]: I0120 19:25:38.815407 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d4jtq" podUID="423e9789-4154-4a6b-bfa5-fe36ca4d7577" containerName="registry-server" probeResult="failure" output=< Jan 20 19:25:38 crc kubenswrapper[4661]: timeout: failed to connect service ":50051" within 1s Jan 20 19:25:38 crc kubenswrapper[4661]: > Jan 20 19:25:47 crc kubenswrapper[4661]: I0120 19:25:47.810535 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:47 crc kubenswrapper[4661]: I0120 19:25:47.883411 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:48 crc kubenswrapper[4661]: I0120 19:25:48.066905 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d4jtq"] Jan 20 19:25:49 crc kubenswrapper[4661]: I0120 19:25:49.198958 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d4jtq" podUID="423e9789-4154-4a6b-bfa5-fe36ca4d7577" containerName="registry-server" containerID="cri-o://728a13802305d40e018d9ec96c4520e69b358fa3b70929b71aabbcb72566182a" gracePeriod=2 Jan 20 19:25:49 crc kubenswrapper[4661]: I0120 19:25:49.640056 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:49 crc kubenswrapper[4661]: I0120 19:25:49.783153 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423e9789-4154-4a6b-bfa5-fe36ca4d7577-utilities\") pod \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\" (UID: \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\") " Jan 20 19:25:49 crc kubenswrapper[4661]: I0120 19:25:49.783304 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423e9789-4154-4a6b-bfa5-fe36ca4d7577-catalog-content\") pod \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\" (UID: \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\") " Jan 20 19:25:49 crc kubenswrapper[4661]: I0120 19:25:49.783349 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnnj8\" (UniqueName: \"kubernetes.io/projected/423e9789-4154-4a6b-bfa5-fe36ca4d7577-kube-api-access-pnnj8\") pod \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\" (UID: \"423e9789-4154-4a6b-bfa5-fe36ca4d7577\") " Jan 20 19:25:49 crc kubenswrapper[4661]: I0120 19:25:49.784444 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/423e9789-4154-4a6b-bfa5-fe36ca4d7577-utilities" (OuterVolumeSpecName: "utilities") pod "423e9789-4154-4a6b-bfa5-fe36ca4d7577" (UID: "423e9789-4154-4a6b-bfa5-fe36ca4d7577"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:25:49 crc kubenswrapper[4661]: I0120 19:25:49.801423 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423e9789-4154-4a6b-bfa5-fe36ca4d7577-kube-api-access-pnnj8" (OuterVolumeSpecName: "kube-api-access-pnnj8") pod "423e9789-4154-4a6b-bfa5-fe36ca4d7577" (UID: "423e9789-4154-4a6b-bfa5-fe36ca4d7577"). InnerVolumeSpecName "kube-api-access-pnnj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:25:49 crc kubenswrapper[4661]: I0120 19:25:49.885613 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnnj8\" (UniqueName: \"kubernetes.io/projected/423e9789-4154-4a6b-bfa5-fe36ca4d7577-kube-api-access-pnnj8\") on node \"crc\" DevicePath \"\"" Jan 20 19:25:49 crc kubenswrapper[4661]: I0120 19:25:49.885822 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423e9789-4154-4a6b-bfa5-fe36ca4d7577-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:25:49 crc kubenswrapper[4661]: I0120 19:25:49.899378 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/423e9789-4154-4a6b-bfa5-fe36ca4d7577-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "423e9789-4154-4a6b-bfa5-fe36ca4d7577" (UID: "423e9789-4154-4a6b-bfa5-fe36ca4d7577"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:25:49 crc kubenswrapper[4661]: I0120 19:25:49.987025 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423e9789-4154-4a6b-bfa5-fe36ca4d7577-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.209756 4661 generic.go:334] "Generic (PLEG): container finished" podID="423e9789-4154-4a6b-bfa5-fe36ca4d7577" containerID="728a13802305d40e018d9ec96c4520e69b358fa3b70929b71aabbcb72566182a" exitCode=0 Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.209805 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4jtq" Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.209803 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4jtq" event={"ID":"423e9789-4154-4a6b-bfa5-fe36ca4d7577","Type":"ContainerDied","Data":"728a13802305d40e018d9ec96c4520e69b358fa3b70929b71aabbcb72566182a"} Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.209923 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4jtq" event={"ID":"423e9789-4154-4a6b-bfa5-fe36ca4d7577","Type":"ContainerDied","Data":"977b3afd049168c5660170e0d8d880797b190c6d6d1dfc19f55a94b3fda48d05"} Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.209940 4661 scope.go:117] "RemoveContainer" containerID="728a13802305d40e018d9ec96c4520e69b358fa3b70929b71aabbcb72566182a" Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.232333 4661 scope.go:117] "RemoveContainer" containerID="7cdeb0d45b211ac7952dfbb45b98164c071d697548fd0b4450d6d5619f47357a" Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.245886 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d4jtq"] Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.256743 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d4jtq"] Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.288111 4661 scope.go:117] "RemoveContainer" containerID="6bb2d6c20dc1785f95f54340e0660cb5419a9c5de1db1e6691de87620d109dcf" Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.343160 4661 scope.go:117] "RemoveContainer" containerID="728a13802305d40e018d9ec96c4520e69b358fa3b70929b71aabbcb72566182a" Jan 20 19:25:50 crc kubenswrapper[4661]: E0120 19:25:50.343599 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"728a13802305d40e018d9ec96c4520e69b358fa3b70929b71aabbcb72566182a\": container with ID starting with 728a13802305d40e018d9ec96c4520e69b358fa3b70929b71aabbcb72566182a not found: ID does not exist" containerID="728a13802305d40e018d9ec96c4520e69b358fa3b70929b71aabbcb72566182a" Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.343653 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"728a13802305d40e018d9ec96c4520e69b358fa3b70929b71aabbcb72566182a"} err="failed to get container status \"728a13802305d40e018d9ec96c4520e69b358fa3b70929b71aabbcb72566182a\": rpc error: code = NotFound desc = could not find container \"728a13802305d40e018d9ec96c4520e69b358fa3b70929b71aabbcb72566182a\": container with ID starting with 728a13802305d40e018d9ec96c4520e69b358fa3b70929b71aabbcb72566182a not found: ID does not exist" Jan 20 19:25:50 crc 
kubenswrapper[4661]: I0120 19:25:50.343740 4661 scope.go:117] "RemoveContainer" containerID="7cdeb0d45b211ac7952dfbb45b98164c071d697548fd0b4450d6d5619f47357a" Jan 20 19:25:50 crc kubenswrapper[4661]: E0120 19:25:50.344063 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cdeb0d45b211ac7952dfbb45b98164c071d697548fd0b4450d6d5619f47357a\": container with ID starting with 7cdeb0d45b211ac7952dfbb45b98164c071d697548fd0b4450d6d5619f47357a not found: ID does not exist" containerID="7cdeb0d45b211ac7952dfbb45b98164c071d697548fd0b4450d6d5619f47357a" Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.344092 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cdeb0d45b211ac7952dfbb45b98164c071d697548fd0b4450d6d5619f47357a"} err="failed to get container status \"7cdeb0d45b211ac7952dfbb45b98164c071d697548fd0b4450d6d5619f47357a\": rpc error: code = NotFound desc = could not find container \"7cdeb0d45b211ac7952dfbb45b98164c071d697548fd0b4450d6d5619f47357a\": container with ID starting with 7cdeb0d45b211ac7952dfbb45b98164c071d697548fd0b4450d6d5619f47357a not found: ID does not exist" Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.344134 4661 scope.go:117] "RemoveContainer" containerID="6bb2d6c20dc1785f95f54340e0660cb5419a9c5de1db1e6691de87620d109dcf" Jan 20 19:25:50 crc kubenswrapper[4661]: E0120 19:25:50.344352 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bb2d6c20dc1785f95f54340e0660cb5419a9c5de1db1e6691de87620d109dcf\": container with ID starting with 6bb2d6c20dc1785f95f54340e0660cb5419a9c5de1db1e6691de87620d109dcf not found: ID does not exist" containerID="6bb2d6c20dc1785f95f54340e0660cb5419a9c5de1db1e6691de87620d109dcf" Jan 20 19:25:50 crc kubenswrapper[4661]: I0120 19:25:50.344378 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bb2d6c20dc1785f95f54340e0660cb5419a9c5de1db1e6691de87620d109dcf"} err="failed to get container status \"6bb2d6c20dc1785f95f54340e0660cb5419a9c5de1db1e6691de87620d109dcf\": rpc error: code = NotFound desc = could not find container \"6bb2d6c20dc1785f95f54340e0660cb5419a9c5de1db1e6691de87620d109dcf\": container with ID starting with 6bb2d6c20dc1785f95f54340e0660cb5419a9c5de1db1e6691de87620d109dcf not found: ID does not exist" Jan 20 19:25:52 crc kubenswrapper[4661]: I0120 19:25:52.154589 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="423e9789-4154-4a6b-bfa5-fe36ca4d7577" path="/var/lib/kubelet/pods/423e9789-4154-4a6b-bfa5-fe36ca4d7577/volumes" Jan 20 19:25:59 crc kubenswrapper[4661]: I0120 19:25:59.323149 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:25:59 crc kubenswrapper[4661]: I0120 19:25:59.324553 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:26:29 crc kubenswrapper[4661]: I0120 19:26:29.323236 4661 patch_prober.go:28] interesting 
pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:26:29 crc kubenswrapper[4661]: I0120 19:26:29.324717 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:26:29 crc kubenswrapper[4661]: I0120 19:26:29.324846 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 19:26:29 crc kubenswrapper[4661]: I0120 19:26:29.325644 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8c6bcc8680b1eddd1950421f97606495736752d26633ff9fc203db7e17d57b4"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 19:26:29 crc kubenswrapper[4661]: I0120 19:26:29.325787 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://c8c6bcc8680b1eddd1950421f97606495736752d26633ff9fc203db7e17d57b4" gracePeriod=600 Jan 20 19:26:29 crc kubenswrapper[4661]: I0120 19:26:29.610954 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="c8c6bcc8680b1eddd1950421f97606495736752d26633ff9fc203db7e17d57b4" exitCode=0 Jan 20 19:26:29 crc kubenswrapper[4661]: I0120 19:26:29.611042 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"c8c6bcc8680b1eddd1950421f97606495736752d26633ff9fc203db7e17d57b4"} Jan 20 19:26:29 crc kubenswrapper[4661]: I0120 19:26:29.611562 4661 scope.go:117] "RemoveContainer" containerID="88aefdb17fbdb2f9a99910e27de0632e3833c47bcd174020bd5e7e94c3da0469" Jan 20 19:26:30 crc kubenswrapper[4661]: I0120 19:26:30.627169 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe"} Jan 20 19:28:29 crc kubenswrapper[4661]: I0120 19:28:29.323832 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:28:29 crc kubenswrapper[4661]: I0120 19:28:29.324251 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:28:59 
crc kubenswrapper[4661]: I0120 19:28:59.323708 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:28:59 crc kubenswrapper[4661]: I0120 19:28:59.324372 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.009298 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q7qv6"] Jan 20 19:29:10 crc kubenswrapper[4661]: E0120 19:29:10.010341 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423e9789-4154-4a6b-bfa5-fe36ca4d7577" containerName="extract-content" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.010357 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="423e9789-4154-4a6b-bfa5-fe36ca4d7577" containerName="extract-content" Jan 20 19:29:10 crc kubenswrapper[4661]: E0120 19:29:10.010378 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423e9789-4154-4a6b-bfa5-fe36ca4d7577" containerName="extract-utilities" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.010388 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="423e9789-4154-4a6b-bfa5-fe36ca4d7577" containerName="extract-utilities" Jan 20 19:29:10 crc kubenswrapper[4661]: E0120 19:29:10.010409 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423e9789-4154-4a6b-bfa5-fe36ca4d7577" containerName="registry-server" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.010418 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="423e9789-4154-4a6b-bfa5-fe36ca4d7577" containerName="registry-server" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.010673 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="423e9789-4154-4a6b-bfa5-fe36ca4d7577" containerName="registry-server" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.012373 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.040164 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7qv6"] Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.176424 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhnfv\" (UniqueName: \"kubernetes.io/projected/c54ce817-ea9c-4eda-bb7a-2451fab78486-kube-api-access-xhnfv\") pod \"community-operators-q7qv6\" (UID: \"c54ce817-ea9c-4eda-bb7a-2451fab78486\") " pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.176487 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ce817-ea9c-4eda-bb7a-2451fab78486-catalog-content\") pod \"community-operators-q7qv6\" (UID: \"c54ce817-ea9c-4eda-bb7a-2451fab78486\") " pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.176557 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ce817-ea9c-4eda-bb7a-2451fab78486-utilities\") pod \"community-operators-q7qv6\" (UID: \"c54ce817-ea9c-4eda-bb7a-2451fab78486\") " pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.278762 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhnfv\" (UniqueName: \"kubernetes.io/projected/c54ce817-ea9c-4eda-bb7a-2451fab78486-kube-api-access-xhnfv\") pod \"community-operators-q7qv6\" (UID: \"c54ce817-ea9c-4eda-bb7a-2451fab78486\") " pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.278854 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ce817-ea9c-4eda-bb7a-2451fab78486-catalog-content\") pod \"community-operators-q7qv6\" (UID: \"c54ce817-ea9c-4eda-bb7a-2451fab78486\") " pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.278957 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ce817-ea9c-4eda-bb7a-2451fab78486-utilities\") pod \"community-operators-q7qv6\" (UID: \"c54ce817-ea9c-4eda-bb7a-2451fab78486\") " pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.279333 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ce817-ea9c-4eda-bb7a-2451fab78486-utilities\") pod \"community-operators-q7qv6\" (UID: \"c54ce817-ea9c-4eda-bb7a-2451fab78486\") " pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.279557 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ce817-ea9c-4eda-bb7a-2451fab78486-catalog-content\") pod \"community-operators-q7qv6\" (UID: \"c54ce817-ea9c-4eda-bb7a-2451fab78486\") " pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.298143 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xhnfv\" (UniqueName: \"kubernetes.io/projected/c54ce817-ea9c-4eda-bb7a-2451fab78486-kube-api-access-xhnfv\") pod \"community-operators-q7qv6\" (UID: \"c54ce817-ea9c-4eda-bb7a-2451fab78486\") " pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.340294 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:10 crc kubenswrapper[4661]: I0120 19:29:10.713340 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7qv6"] Jan 20 19:29:11 crc kubenswrapper[4661]: I0120 19:29:11.442026 4661 generic.go:334] "Generic (PLEG): container finished" podID="c54ce817-ea9c-4eda-bb7a-2451fab78486" containerID="f4e208f3900fb94834e4b30cd3a2237d703d5203dc6dace3500a38b6a7f367f0" exitCode=0 Jan 20 19:29:11 crc kubenswrapper[4661]: I0120 19:29:11.442382 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7qv6" event={"ID":"c54ce817-ea9c-4eda-bb7a-2451fab78486","Type":"ContainerDied","Data":"f4e208f3900fb94834e4b30cd3a2237d703d5203dc6dace3500a38b6a7f367f0"} Jan 20 19:29:11 crc kubenswrapper[4661]: I0120 19:29:11.442436 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7qv6" event={"ID":"c54ce817-ea9c-4eda-bb7a-2451fab78486","Type":"ContainerStarted","Data":"1050e57dc13f958722a6277f417cebd658106ffd5b77f8bf8925397199d4ab70"} Jan 20 19:29:12 crc kubenswrapper[4661]: I0120 19:29:12.453876 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7qv6" event={"ID":"c54ce817-ea9c-4eda-bb7a-2451fab78486","Type":"ContainerStarted","Data":"3af016045d7e8d3707dc8811d0e131414db5d67eb0238ef404102aaaf84d6789"} Jan 20 19:29:13 crc kubenswrapper[4661]: I0120 19:29:13.466849 4661 generic.go:334] "Generic (PLEG): container finished" podID="c54ce817-ea9c-4eda-bb7a-2451fab78486" containerID="3af016045d7e8d3707dc8811d0e131414db5d67eb0238ef404102aaaf84d6789" exitCode=0 Jan 20 19:29:13 crc kubenswrapper[4661]: I0120 19:29:13.467007 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7qv6" event={"ID":"c54ce817-ea9c-4eda-bb7a-2451fab78486","Type":"ContainerDied","Data":"3af016045d7e8d3707dc8811d0e131414db5d67eb0238ef404102aaaf84d6789"} Jan 20 19:29:14 crc kubenswrapper[4661]: I0120 19:29:14.476388 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7qv6" event={"ID":"c54ce817-ea9c-4eda-bb7a-2451fab78486","Type":"ContainerStarted","Data":"44c0dacfb5566dbbba4c591f3e1eeb6ef55945de07f4141874a71d857c87c5e8"} Jan 20 19:29:14 crc kubenswrapper[4661]: I0120 19:29:14.507110 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q7qv6" podStartSLOduration=2.816583303 podStartE2EDuration="5.507088923s" podCreationTimestamp="2026-01-20 19:29:09 +0000 UTC" firstStartedPulling="2026-01-20 19:29:11.446768622 +0000 UTC m=+5007.777558284" lastFinishedPulling="2026-01-20 19:29:14.137274242 +0000 UTC m=+5010.468063904" observedRunningTime="2026-01-20 19:29:14.499592708 +0000 UTC m=+5010.830382380" watchObservedRunningTime="2026-01-20 19:29:14.507088923 +0000 UTC m=+5010.837878585" Jan 20 19:29:20 crc kubenswrapper[4661]: I0120 19:29:20.340973 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:20 crc kubenswrapper[4661]: I0120 19:29:20.341375 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:20 crc kubenswrapper[4661]: I0120 19:29:20.406968 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:20 crc kubenswrapper[4661]: I0120 19:29:20.592151 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:20 crc kubenswrapper[4661]: I0120 19:29:20.656533 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7qv6"] Jan 20 19:29:22 crc kubenswrapper[4661]: I0120 19:29:22.565595 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q7qv6" podUID="c54ce817-ea9c-4eda-bb7a-2451fab78486" containerName="registry-server" containerID="cri-o://44c0dacfb5566dbbba4c591f3e1eeb6ef55945de07f4141874a71d857c87c5e8" gracePeriod=2 Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.057734 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.187273 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ce817-ea9c-4eda-bb7a-2451fab78486-catalog-content\") pod \"c54ce817-ea9c-4eda-bb7a-2451fab78486\" (UID: \"c54ce817-ea9c-4eda-bb7a-2451fab78486\") " Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.187416 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ce817-ea9c-4eda-bb7a-2451fab78486-utilities\") pod \"c54ce817-ea9c-4eda-bb7a-2451fab78486\" (UID: \"c54ce817-ea9c-4eda-bb7a-2451fab78486\") " Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.187458 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhnfv\" (UniqueName: \"kubernetes.io/projected/c54ce817-ea9c-4eda-bb7a-2451fab78486-kube-api-access-xhnfv\") pod \"c54ce817-ea9c-4eda-bb7a-2451fab78486\" (UID: \"c54ce817-ea9c-4eda-bb7a-2451fab78486\") " Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.188720 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c54ce817-ea9c-4eda-bb7a-2451fab78486-utilities" (OuterVolumeSpecName: "utilities") pod "c54ce817-ea9c-4eda-bb7a-2451fab78486" (UID: "c54ce817-ea9c-4eda-bb7a-2451fab78486"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.198122 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c54ce817-ea9c-4eda-bb7a-2451fab78486-kube-api-access-xhnfv" (OuterVolumeSpecName: "kube-api-access-xhnfv") pod "c54ce817-ea9c-4eda-bb7a-2451fab78486" (UID: "c54ce817-ea9c-4eda-bb7a-2451fab78486"). InnerVolumeSpecName "kube-api-access-xhnfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.270926 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c54ce817-ea9c-4eda-bb7a-2451fab78486-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c54ce817-ea9c-4eda-bb7a-2451fab78486" (UID: "c54ce817-ea9c-4eda-bb7a-2451fab78486"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.291287 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhnfv\" (UniqueName: \"kubernetes.io/projected/c54ce817-ea9c-4eda-bb7a-2451fab78486-kube-api-access-xhnfv\") on node \"crc\" DevicePath \"\"" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.291319 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c54ce817-ea9c-4eda-bb7a-2451fab78486-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.291333 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c54ce817-ea9c-4eda-bb7a-2451fab78486-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.576945 4661 generic.go:334] "Generic (PLEG): container finished" podID="c54ce817-ea9c-4eda-bb7a-2451fab78486" containerID="44c0dacfb5566dbbba4c591f3e1eeb6ef55945de07f4141874a71d857c87c5e8" exitCode=0 Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.576996 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7qv6" event={"ID":"c54ce817-ea9c-4eda-bb7a-2451fab78486","Type":"ContainerDied","Data":"44c0dacfb5566dbbba4c591f3e1eeb6ef55945de07f4141874a71d857c87c5e8"} Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.577008 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7qv6" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.577030 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7qv6" event={"ID":"c54ce817-ea9c-4eda-bb7a-2451fab78486","Type":"ContainerDied","Data":"1050e57dc13f958722a6277f417cebd658106ffd5b77f8bf8925397199d4ab70"} Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.577052 4661 scope.go:117] "RemoveContainer" containerID="44c0dacfb5566dbbba4c591f3e1eeb6ef55945de07f4141874a71d857c87c5e8" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.602366 4661 scope.go:117] "RemoveContainer" containerID="3af016045d7e8d3707dc8811d0e131414db5d67eb0238ef404102aaaf84d6789" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.613946 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7qv6"] Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.625086 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q7qv6"] Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.656782 4661 scope.go:117] "RemoveContainer" containerID="f4e208f3900fb94834e4b30cd3a2237d703d5203dc6dace3500a38b6a7f367f0" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.693327 4661 scope.go:117] "RemoveContainer" containerID="44c0dacfb5566dbbba4c591f3e1eeb6ef55945de07f4141874a71d857c87c5e8" Jan 20 19:29:23 crc kubenswrapper[4661]: E0120 19:29:23.693943 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c0dacfb5566dbbba4c591f3e1eeb6ef55945de07f4141874a71d857c87c5e8\": container with ID starting with 44c0dacfb5566dbbba4c591f3e1eeb6ef55945de07f4141874a71d857c87c5e8 not found: ID does not exist" containerID="44c0dacfb5566dbbba4c591f3e1eeb6ef55945de07f4141874a71d857c87c5e8" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.693983 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44c0dacfb5566dbbba4c591f3e1eeb6ef55945de07f4141874a71d857c87c5e8"} err="failed to get container status \"44c0dacfb5566dbbba4c591f3e1eeb6ef55945de07f4141874a71d857c87c5e8\": rpc error: code = NotFound desc = could not find container \"44c0dacfb5566dbbba4c591f3e1eeb6ef55945de07f4141874a71d857c87c5e8\": container with ID starting with 44c0dacfb5566dbbba4c591f3e1eeb6ef55945de07f4141874a71d857c87c5e8 not found: ID does not exist" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.694025 4661 scope.go:117] "RemoveContainer" containerID="3af016045d7e8d3707dc8811d0e131414db5d67eb0238ef404102aaaf84d6789" Jan 20 19:29:23 crc kubenswrapper[4661]: E0120 19:29:23.694512 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3af016045d7e8d3707dc8811d0e131414db5d67eb0238ef404102aaaf84d6789\": container with ID starting with 3af016045d7e8d3707dc8811d0e131414db5d67eb0238ef404102aaaf84d6789 not found: ID does not exist" containerID="3af016045d7e8d3707dc8811d0e131414db5d67eb0238ef404102aaaf84d6789" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.694543 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af016045d7e8d3707dc8811d0e131414db5d67eb0238ef404102aaaf84d6789"} err="failed to get container status \"3af016045d7e8d3707dc8811d0e131414db5d67eb0238ef404102aaaf84d6789\": rpc error: code = NotFound desc = could not find 
container \"3af016045d7e8d3707dc8811d0e131414db5d67eb0238ef404102aaaf84d6789\": container with ID starting with 3af016045d7e8d3707dc8811d0e131414db5d67eb0238ef404102aaaf84d6789 not found: ID does not exist" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.694561 4661 scope.go:117] "RemoveContainer" containerID="f4e208f3900fb94834e4b30cd3a2237d703d5203dc6dace3500a38b6a7f367f0" Jan 20 19:29:23 crc kubenswrapper[4661]: E0120 19:29:23.695807 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4e208f3900fb94834e4b30cd3a2237d703d5203dc6dace3500a38b6a7f367f0\": container with ID starting with f4e208f3900fb94834e4b30cd3a2237d703d5203dc6dace3500a38b6a7f367f0 not found: ID does not exist" containerID="f4e208f3900fb94834e4b30cd3a2237d703d5203dc6dace3500a38b6a7f367f0" Jan 20 19:29:23 crc kubenswrapper[4661]: I0120 19:29:23.695835 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e208f3900fb94834e4b30cd3a2237d703d5203dc6dace3500a38b6a7f367f0"} err="failed to get container status \"f4e208f3900fb94834e4b30cd3a2237d703d5203dc6dace3500a38b6a7f367f0\": rpc error: code = NotFound desc = could not find container \"f4e208f3900fb94834e4b30cd3a2237d703d5203dc6dace3500a38b6a7f367f0\": container with ID starting with f4e208f3900fb94834e4b30cd3a2237d703d5203dc6dace3500a38b6a7f367f0 not found: ID does not exist" Jan 20 19:29:24 crc kubenswrapper[4661]: I0120 19:29:24.154995 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c54ce817-ea9c-4eda-bb7a-2451fab78486" path="/var/lib/kubelet/pods/c54ce817-ea9c-4eda-bb7a-2451fab78486/volumes" Jan 20 19:29:27 crc kubenswrapper[4661]: I0120 19:29:27.652739 4661 generic.go:334] "Generic (PLEG): container finished" podID="dec0aa71-5aea-4d67-bf53-1f4a04b38a7a" containerID="21da3d6317aa762ceff13c328487020472a0b2825302720548d71b607385ae89" exitCode=0 Jan 20 19:29:27 crc kubenswrapper[4661]: I0120 19:29:27.652871 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7bgfs/must-gather-vdf4s" event={"ID":"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a","Type":"ContainerDied","Data":"21da3d6317aa762ceff13c328487020472a0b2825302720548d71b607385ae89"} Jan 20 19:29:27 crc kubenswrapper[4661]: I0120 19:29:27.654800 4661 scope.go:117] "RemoveContainer" containerID="21da3d6317aa762ceff13c328487020472a0b2825302720548d71b607385ae89" Jan 20 19:29:28 crc kubenswrapper[4661]: I0120 19:29:28.693753 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7bgfs_must-gather-vdf4s_dec0aa71-5aea-4d67-bf53-1f4a04b38a7a/gather/0.log" Jan 20 19:29:29 crc kubenswrapper[4661]: I0120 19:29:29.323313 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:29:29 crc kubenswrapper[4661]: I0120 19:29:29.323360 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:29:29 crc kubenswrapper[4661]: I0120 19:29:29.323396 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 19:29:29 crc kubenswrapper[4661]: I0120 19:29:29.323880 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 19:29:29 crc kubenswrapper[4661]: I0120 19:29:29.323947 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" gracePeriod=600 Jan 20 19:29:29 crc kubenswrapper[4661]: E0120 19:29:29.451248 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:29:29 crc kubenswrapper[4661]: I0120 19:29:29.677588 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" exitCode=0 Jan 20 19:29:29 crc kubenswrapper[4661]: I0120 19:29:29.677640 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe"} Jan 20 19:29:29 crc kubenswrapper[4661]: I0120 19:29:29.677695 4661 scope.go:117] "RemoveContainer" containerID="c8c6bcc8680b1eddd1950421f97606495736752d26633ff9fc203db7e17d57b4" Jan 20 19:29:29 crc kubenswrapper[4661]: I0120 19:29:29.678650 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:29:29 crc kubenswrapper[4661]: E0120 19:29:29.679291 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:29:37 crc kubenswrapper[4661]: I0120 19:29:37.558828 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7bgfs/must-gather-vdf4s"] Jan 20 19:29:37 crc kubenswrapper[4661]: I0120 19:29:37.559507 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7bgfs/must-gather-vdf4s" podUID="dec0aa71-5aea-4d67-bf53-1f4a04b38a7a" containerName="copy" containerID="cri-o://59b5ede8e1f2c412bf65eaf53dccb54cf13dbb1159c80808b98bac29da71ea7c" gracePeriod=2 Jan 20 19:29:37 crc kubenswrapper[4661]: I0120 19:29:37.568334 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7bgfs/must-gather-vdf4s"] Jan 20 
19:29:37 crc kubenswrapper[4661]: I0120 19:29:37.767280 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7bgfs_must-gather-vdf4s_dec0aa71-5aea-4d67-bf53-1f4a04b38a7a/copy/0.log" Jan 20 19:29:37 crc kubenswrapper[4661]: I0120 19:29:37.767605 4661 generic.go:334] "Generic (PLEG): container finished" podID="dec0aa71-5aea-4d67-bf53-1f4a04b38a7a" containerID="59b5ede8e1f2c412bf65eaf53dccb54cf13dbb1159c80808b98bac29da71ea7c" exitCode=143 Jan 20 19:29:38 crc kubenswrapper[4661]: I0120 19:29:38.182818 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7bgfs_must-gather-vdf4s_dec0aa71-5aea-4d67-bf53-1f4a04b38a7a/copy/0.log" Jan 20 19:29:38 crc kubenswrapper[4661]: I0120 19:29:38.183520 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7bgfs/must-gather-vdf4s" Jan 20 19:29:38 crc kubenswrapper[4661]: I0120 19:29:38.239319 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dec0aa71-5aea-4d67-bf53-1f4a04b38a7a-must-gather-output\") pod \"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a\" (UID: \"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a\") " Jan 20 19:29:38 crc kubenswrapper[4661]: I0120 19:29:38.239434 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh27h\" (UniqueName: \"kubernetes.io/projected/dec0aa71-5aea-4d67-bf53-1f4a04b38a7a-kube-api-access-vh27h\") pod \"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a\" (UID: \"dec0aa71-5aea-4d67-bf53-1f4a04b38a7a\") " Jan 20 19:29:38 crc kubenswrapper[4661]: I0120 19:29:38.249254 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec0aa71-5aea-4d67-bf53-1f4a04b38a7a-kube-api-access-vh27h" (OuterVolumeSpecName: "kube-api-access-vh27h") pod "dec0aa71-5aea-4d67-bf53-1f4a04b38a7a" (UID: "dec0aa71-5aea-4d67-bf53-1f4a04b38a7a"). InnerVolumeSpecName "kube-api-access-vh27h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:29:38 crc kubenswrapper[4661]: I0120 19:29:38.341546 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh27h\" (UniqueName: \"kubernetes.io/projected/dec0aa71-5aea-4d67-bf53-1f4a04b38a7a-kube-api-access-vh27h\") on node \"crc\" DevicePath \"\"" Jan 20 19:29:38 crc kubenswrapper[4661]: I0120 19:29:38.444827 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dec0aa71-5aea-4d67-bf53-1f4a04b38a7a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "dec0aa71-5aea-4d67-bf53-1f4a04b38a7a" (UID: "dec0aa71-5aea-4d67-bf53-1f4a04b38a7a"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:29:38 crc kubenswrapper[4661]: I0120 19:29:38.547562 4661 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dec0aa71-5aea-4d67-bf53-1f4a04b38a7a-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 20 19:29:38 crc kubenswrapper[4661]: I0120 19:29:38.777546 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7bgfs_must-gather-vdf4s_dec0aa71-5aea-4d67-bf53-1f4a04b38a7a/copy/0.log" Jan 20 19:29:38 crc kubenswrapper[4661]: I0120 19:29:38.778062 4661 scope.go:117] "RemoveContainer" containerID="59b5ede8e1f2c412bf65eaf53dccb54cf13dbb1159c80808b98bac29da71ea7c" Jan 20 19:29:38 crc kubenswrapper[4661]: I0120 19:29:38.778195 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7bgfs/must-gather-vdf4s" Jan 20 19:29:38 crc kubenswrapper[4661]: I0120 19:29:38.822652 4661 scope.go:117] "RemoveContainer" containerID="21da3d6317aa762ceff13c328487020472a0b2825302720548d71b607385ae89" Jan 20 19:29:40 crc kubenswrapper[4661]: I0120 19:29:40.161976 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec0aa71-5aea-4d67-bf53-1f4a04b38a7a" path="/var/lib/kubelet/pods/dec0aa71-5aea-4d67-bf53-1f4a04b38a7a/volumes" Jan 20 19:29:43 crc kubenswrapper[4661]: I0120 19:29:43.144101 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:29:43 crc kubenswrapper[4661]: E0120 19:29:43.145616 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:29:55 crc kubenswrapper[4661]: I0120 19:29:55.143079 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:29:55 crc kubenswrapper[4661]: E0120 19:29:55.144219 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.182483 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht"] Jan 20 19:30:00 crc kubenswrapper[4661]: E0120 19:30:00.184013 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c54ce817-ea9c-4eda-bb7a-2451fab78486" containerName="extract-content" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.184045 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54ce817-ea9c-4eda-bb7a-2451fab78486" containerName="extract-content" Jan 20 19:30:00 crc kubenswrapper[4661]: E0120 19:30:00.184069 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c54ce817-ea9c-4eda-bb7a-2451fab78486" containerName="registry-server" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 
19:30:00.184088 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54ce817-ea9c-4eda-bb7a-2451fab78486" containerName="registry-server" Jan 20 19:30:00 crc kubenswrapper[4661]: E0120 19:30:00.184130 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec0aa71-5aea-4d67-bf53-1f4a04b38a7a" containerName="copy" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.184147 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec0aa71-5aea-4d67-bf53-1f4a04b38a7a" containerName="copy" Jan 20 19:30:00 crc kubenswrapper[4661]: E0120 19:30:00.184182 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c54ce817-ea9c-4eda-bb7a-2451fab78486" containerName="extract-utilities" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.184198 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54ce817-ea9c-4eda-bb7a-2451fab78486" containerName="extract-utilities" Jan 20 19:30:00 crc kubenswrapper[4661]: E0120 19:30:00.184241 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec0aa71-5aea-4d67-bf53-1f4a04b38a7a" containerName="gather" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.184257 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec0aa71-5aea-4d67-bf53-1f4a04b38a7a" containerName="gather" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.184655 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="c54ce817-ea9c-4eda-bb7a-2451fab78486" containerName="registry-server" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.184725 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec0aa71-5aea-4d67-bf53-1f4a04b38a7a" containerName="gather" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.184777 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec0aa71-5aea-4d67-bf53-1f4a04b38a7a" containerName="copy" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.186054 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.193783 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.193824 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.221299 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht"] Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.251293 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npssr\" (UniqueName: \"kubernetes.io/projected/909d1c3a-f62d-479f-a0d4-2fb01d066e30-kube-api-access-npssr\") pod \"collect-profiles-29482290-xbjht\" (UID: \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.251325 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/909d1c3a-f62d-479f-a0d4-2fb01d066e30-secret-volume\") pod \"collect-profiles-29482290-xbjht\" (UID: \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.251376 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/909d1c3a-f62d-479f-a0d4-2fb01d066e30-config-volume\") pod \"collect-profiles-29482290-xbjht\" (UID: \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.352545 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/909d1c3a-f62d-479f-a0d4-2fb01d066e30-config-volume\") pod \"collect-profiles-29482290-xbjht\" (UID: \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.352964 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npssr\" (UniqueName: \"kubernetes.io/projected/909d1c3a-f62d-479f-a0d4-2fb01d066e30-kube-api-access-npssr\") pod \"collect-profiles-29482290-xbjht\" (UID: \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.353016 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/909d1c3a-f62d-479f-a0d4-2fb01d066e30-secret-volume\") pod \"collect-profiles-29482290-xbjht\" (UID: \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.353860 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/909d1c3a-f62d-479f-a0d4-2fb01d066e30-config-volume\") pod 
\"collect-profiles-29482290-xbjht\" (UID: \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.631633 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/909d1c3a-f62d-479f-a0d4-2fb01d066e30-secret-volume\") pod \"collect-profiles-29482290-xbjht\" (UID: \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.633054 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npssr\" (UniqueName: \"kubernetes.io/projected/909d1c3a-f62d-479f-a0d4-2fb01d066e30-kube-api-access-npssr\") pod \"collect-profiles-29482290-xbjht\" (UID: \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" Jan 20 19:30:00 crc kubenswrapper[4661]: I0120 19:30:00.811796 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" Jan 20 19:30:01 crc kubenswrapper[4661]: I0120 19:30:01.357899 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht"] Jan 20 19:30:02 crc kubenswrapper[4661]: I0120 19:30:02.040094 4661 generic.go:334] "Generic (PLEG): container finished" podID="909d1c3a-f62d-479f-a0d4-2fb01d066e30" containerID="2b353aa6e7fa8aef6f1296e573ed526e1180ca3cf26567d8301e682a2e2f3af0" exitCode=0 Jan 20 19:30:02 crc kubenswrapper[4661]: I0120 19:30:02.040153 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" event={"ID":"909d1c3a-f62d-479f-a0d4-2fb01d066e30","Type":"ContainerDied","Data":"2b353aa6e7fa8aef6f1296e573ed526e1180ca3cf26567d8301e682a2e2f3af0"} Jan 20 19:30:02 crc kubenswrapper[4661]: I0120 19:30:02.040391 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" event={"ID":"909d1c3a-f62d-479f-a0d4-2fb01d066e30","Type":"ContainerStarted","Data":"4838a4efb0e03c9ba666089a9d6d248fbed14ecf5fdff819ebaed7d4b3f46318"} Jan 20 19:30:03 crc kubenswrapper[4661]: I0120 19:30:03.417496 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" Jan 20 19:30:03 crc kubenswrapper[4661]: I0120 19:30:03.531610 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/909d1c3a-f62d-479f-a0d4-2fb01d066e30-secret-volume\") pod \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\" (UID: \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\") " Jan 20 19:30:03 crc kubenswrapper[4661]: I0120 19:30:03.532741 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npssr\" (UniqueName: \"kubernetes.io/projected/909d1c3a-f62d-479f-a0d4-2fb01d066e30-kube-api-access-npssr\") pod \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\" (UID: \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\") " Jan 20 19:30:03 crc kubenswrapper[4661]: I0120 19:30:03.532881 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/909d1c3a-f62d-479f-a0d4-2fb01d066e30-config-volume\") pod \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\" (UID: \"909d1c3a-f62d-479f-a0d4-2fb01d066e30\") " Jan 20 19:30:03 crc kubenswrapper[4661]: I0120 19:30:03.533511 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/909d1c3a-f62d-479f-a0d4-2fb01d066e30-config-volume" (OuterVolumeSpecName: "config-volume") pod "909d1c3a-f62d-479f-a0d4-2fb01d066e30" (UID: "909d1c3a-f62d-479f-a0d4-2fb01d066e30"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 19:30:03 crc kubenswrapper[4661]: I0120 19:30:03.536978 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/909d1c3a-f62d-479f-a0d4-2fb01d066e30-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "909d1c3a-f62d-479f-a0d4-2fb01d066e30" (UID: "909d1c3a-f62d-479f-a0d4-2fb01d066e30"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 19:30:03 crc kubenswrapper[4661]: I0120 19:30:03.538463 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/909d1c3a-f62d-479f-a0d4-2fb01d066e30-kube-api-access-npssr" (OuterVolumeSpecName: "kube-api-access-npssr") pod "909d1c3a-f62d-479f-a0d4-2fb01d066e30" (UID: "909d1c3a-f62d-479f-a0d4-2fb01d066e30"). InnerVolumeSpecName "kube-api-access-npssr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:30:03 crc kubenswrapper[4661]: I0120 19:30:03.635723 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npssr\" (UniqueName: \"kubernetes.io/projected/909d1c3a-f62d-479f-a0d4-2fb01d066e30-kube-api-access-npssr\") on node \"crc\" DevicePath \"\"" Jan 20 19:30:03 crc kubenswrapper[4661]: I0120 19:30:03.635774 4661 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/909d1c3a-f62d-479f-a0d4-2fb01d066e30-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 19:30:03 crc kubenswrapper[4661]: I0120 19:30:03.635793 4661 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/909d1c3a-f62d-479f-a0d4-2fb01d066e30-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 19:30:04 crc kubenswrapper[4661]: I0120 19:30:04.071694 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" event={"ID":"909d1c3a-f62d-479f-a0d4-2fb01d066e30","Type":"ContainerDied","Data":"4838a4efb0e03c9ba666089a9d6d248fbed14ecf5fdff819ebaed7d4b3f46318"} Jan 20 19:30:04 crc kubenswrapper[4661]: I0120 19:30:04.071759 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4838a4efb0e03c9ba666089a9d6d248fbed14ecf5fdff819ebaed7d4b3f46318" Jan 20 19:30:04 crc kubenswrapper[4661]: I0120 19:30:04.071708 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482290-xbjht" Jan 20 19:30:04 crc kubenswrapper[4661]: I0120 19:30:04.487991 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8"] Jan 20 19:30:04 crc kubenswrapper[4661]: I0120 19:30:04.496843 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482245-w4pl8"] Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.159648 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62" path="/var/lib/kubelet/pods/2fb2a9dd-f8b0-436b-88e9-8e9b74ac7f62/volumes" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.554712 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cd9j8"] Jan 20 19:30:06 crc kubenswrapper[4661]: E0120 19:30:06.555332 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909d1c3a-f62d-479f-a0d4-2fb01d066e30" containerName="collect-profiles" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.555430 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="909d1c3a-f62d-479f-a0d4-2fb01d066e30" containerName="collect-profiles" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.555655 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="909d1c3a-f62d-479f-a0d4-2fb01d066e30" containerName="collect-profiles" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.556976 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.583595 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cd9j8"] Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.605152 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb24ae3-0e22-47f8-b197-e236b88a72b3-catalog-content\") pod \"certified-operators-cd9j8\" (UID: \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\") " pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.605470 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwcr8\" (UniqueName: \"kubernetes.io/projected/1fb24ae3-0e22-47f8-b197-e236b88a72b3-kube-api-access-vwcr8\") pod \"certified-operators-cd9j8\" (UID: \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\") " pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.605633 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb24ae3-0e22-47f8-b197-e236b88a72b3-utilities\") pod \"certified-operators-cd9j8\" (UID: \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\") " pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.708941 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwcr8\" (UniqueName: \"kubernetes.io/projected/1fb24ae3-0e22-47f8-b197-e236b88a72b3-kube-api-access-vwcr8\") pod \"certified-operators-cd9j8\" (UID: \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\") " pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.709257 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb24ae3-0e22-47f8-b197-e236b88a72b3-utilities\") pod \"certified-operators-cd9j8\" (UID: \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\") " pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.709492 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb24ae3-0e22-47f8-b197-e236b88a72b3-catalog-content\") pod \"certified-operators-cd9j8\" (UID: \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\") " pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.710105 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb24ae3-0e22-47f8-b197-e236b88a72b3-catalog-content\") pod \"certified-operators-cd9j8\" (UID: \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\") " pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.710603 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb24ae3-0e22-47f8-b197-e236b88a72b3-utilities\") pod \"certified-operators-cd9j8\" (UID: \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\") " pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.729448 4661 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vwcr8\" (UniqueName: \"kubernetes.io/projected/1fb24ae3-0e22-47f8-b197-e236b88a72b3-kube-api-access-vwcr8\") pod \"certified-operators-cd9j8\" (UID: \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\") " pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:06 crc kubenswrapper[4661]: I0120 19:30:06.891282 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:07 crc kubenswrapper[4661]: I0120 19:30:07.370833 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cd9j8"] Jan 20 19:30:08 crc kubenswrapper[4661]: I0120 19:30:08.135862 4661 generic.go:334] "Generic (PLEG): container finished" podID="1fb24ae3-0e22-47f8-b197-e236b88a72b3" containerID="1c71b292b1f7715a756ad67b8b3a6ad1a4b5875bca13dba9110d00b7dac55fa4" exitCode=0 Jan 20 19:30:08 crc kubenswrapper[4661]: I0120 19:30:08.136327 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd9j8" event={"ID":"1fb24ae3-0e22-47f8-b197-e236b88a72b3","Type":"ContainerDied","Data":"1c71b292b1f7715a756ad67b8b3a6ad1a4b5875bca13dba9110d00b7dac55fa4"} Jan 20 19:30:08 crc kubenswrapper[4661]: I0120 19:30:08.136361 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd9j8" event={"ID":"1fb24ae3-0e22-47f8-b197-e236b88a72b3","Type":"ContainerStarted","Data":"b30a23e5c5c741920d73a1c4559ec3c374bfe9c8a8f03f589af0f8de1552d88f"} Jan 20 19:30:10 crc kubenswrapper[4661]: I0120 19:30:10.142877 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:30:10 crc kubenswrapper[4661]: E0120 19:30:10.145554 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:30:10 crc kubenswrapper[4661]: I0120 19:30:10.181048 4661 generic.go:334] "Generic (PLEG): container finished" podID="1fb24ae3-0e22-47f8-b197-e236b88a72b3" containerID="5ba0d380a38cc94df00cdd8a30ee71fdc7abbaa52f9f6659613b956b6e14430b" exitCode=0 Jan 20 19:30:10 crc kubenswrapper[4661]: I0120 19:30:10.181107 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd9j8" event={"ID":"1fb24ae3-0e22-47f8-b197-e236b88a72b3","Type":"ContainerDied","Data":"5ba0d380a38cc94df00cdd8a30ee71fdc7abbaa52f9f6659613b956b6e14430b"} Jan 20 19:30:12 crc kubenswrapper[4661]: I0120 19:30:12.200252 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd9j8" event={"ID":"1fb24ae3-0e22-47f8-b197-e236b88a72b3","Type":"ContainerStarted","Data":"3aedb91957221786e1dcc868a09e3573f3c5afa43abd2444844f68df756f724c"} Jan 20 19:30:12 crc kubenswrapper[4661]: I0120 19:30:12.225322 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cd9j8" podStartSLOduration=3.082149845 podStartE2EDuration="6.225303339s" podCreationTimestamp="2026-01-20 19:30:06 +0000 UTC" firstStartedPulling="2026-01-20 19:30:08.139498557 +0000 UTC m=+5064.470288219" 
lastFinishedPulling="2026-01-20 19:30:11.282652051 +0000 UTC m=+5067.613441713" observedRunningTime="2026-01-20 19:30:12.218389879 +0000 UTC m=+5068.549179571" watchObservedRunningTime="2026-01-20 19:30:12.225303339 +0000 UTC m=+5068.556093011" Jan 20 19:30:16 crc kubenswrapper[4661]: I0120 19:30:16.891492 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:16 crc kubenswrapper[4661]: I0120 19:30:16.892986 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:16 crc kubenswrapper[4661]: I0120 19:30:16.960258 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:17 crc kubenswrapper[4661]: I0120 19:30:17.303723 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:17 crc kubenswrapper[4661]: I0120 19:30:17.372637 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cd9j8"] Jan 20 19:30:19 crc kubenswrapper[4661]: I0120 19:30:19.260994 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cd9j8" podUID="1fb24ae3-0e22-47f8-b197-e236b88a72b3" containerName="registry-server" containerID="cri-o://3aedb91957221786e1dcc868a09e3573f3c5afa43abd2444844f68df756f724c" gracePeriod=2 Jan 20 19:30:19 crc kubenswrapper[4661]: I0120 19:30:19.722801 4661 scope.go:117] "RemoveContainer" containerID="b6c0c5fbff6a14c086f1e79f41da24c041b84daf259323785fe0df6e12619f9d" Jan 20 19:30:19 crc kubenswrapper[4661]: I0120 19:30:19.729967 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:19 crc kubenswrapper[4661]: I0120 19:30:19.788412 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb24ae3-0e22-47f8-b197-e236b88a72b3-catalog-content\") pod \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\" (UID: \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\") " Jan 20 19:30:19 crc kubenswrapper[4661]: I0120 19:30:19.788956 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwcr8\" (UniqueName: \"kubernetes.io/projected/1fb24ae3-0e22-47f8-b197-e236b88a72b3-kube-api-access-vwcr8\") pod \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\" (UID: \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\") " Jan 20 19:30:19 crc kubenswrapper[4661]: I0120 19:30:19.789069 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb24ae3-0e22-47f8-b197-e236b88a72b3-utilities\") pod \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\" (UID: \"1fb24ae3-0e22-47f8-b197-e236b88a72b3\") " Jan 20 19:30:19 crc kubenswrapper[4661]: I0120 19:30:19.790809 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fb24ae3-0e22-47f8-b197-e236b88a72b3-utilities" (OuterVolumeSpecName: "utilities") pod "1fb24ae3-0e22-47f8-b197-e236b88a72b3" (UID: "1fb24ae3-0e22-47f8-b197-e236b88a72b3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:30:19 crc kubenswrapper[4661]: I0120 19:30:19.810333 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb24ae3-0e22-47f8-b197-e236b88a72b3-kube-api-access-vwcr8" (OuterVolumeSpecName: "kube-api-access-vwcr8") pod "1fb24ae3-0e22-47f8-b197-e236b88a72b3" (UID: "1fb24ae3-0e22-47f8-b197-e236b88a72b3"). InnerVolumeSpecName "kube-api-access-vwcr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:30:19 crc kubenswrapper[4661]: I0120 19:30:19.891648 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwcr8\" (UniqueName: \"kubernetes.io/projected/1fb24ae3-0e22-47f8-b197-e236b88a72b3-kube-api-access-vwcr8\") on node \"crc\" DevicePath \"\"" Jan 20 19:30:19 crc kubenswrapper[4661]: I0120 19:30:19.891683 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb24ae3-0e22-47f8-b197-e236b88a72b3-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:30:19 crc kubenswrapper[4661]: I0120 19:30:19.898068 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fb24ae3-0e22-47f8-b197-e236b88a72b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fb24ae3-0e22-47f8-b197-e236b88a72b3" (UID: "1fb24ae3-0e22-47f8-b197-e236b88a72b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:30:19 crc kubenswrapper[4661]: I0120 19:30:19.993860 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb24ae3-0e22-47f8-b197-e236b88a72b3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.278978 4661 generic.go:334] "Generic (PLEG): container finished" podID="1fb24ae3-0e22-47f8-b197-e236b88a72b3" containerID="3aedb91957221786e1dcc868a09e3573f3c5afa43abd2444844f68df756f724c" exitCode=0 Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.279028 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd9j8" event={"ID":"1fb24ae3-0e22-47f8-b197-e236b88a72b3","Type":"ContainerDied","Data":"3aedb91957221786e1dcc868a09e3573f3c5afa43abd2444844f68df756f724c"} Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.279062 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd9j8" event={"ID":"1fb24ae3-0e22-47f8-b197-e236b88a72b3","Type":"ContainerDied","Data":"b30a23e5c5c741920d73a1c4559ec3c374bfe9c8a8f03f589af0f8de1552d88f"} Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.279081 4661 scope.go:117] "RemoveContainer" containerID="3aedb91957221786e1dcc868a09e3573f3c5afa43abd2444844f68df756f724c" Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.279036 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cd9j8" Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.307174 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cd9j8"] Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.309869 4661 scope.go:117] "RemoveContainer" containerID="5ba0d380a38cc94df00cdd8a30ee71fdc7abbaa52f9f6659613b956b6e14430b" Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.321913 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cd9j8"] Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.331445 4661 scope.go:117] "RemoveContainer" containerID="1c71b292b1f7715a756ad67b8b3a6ad1a4b5875bca13dba9110d00b7dac55fa4" Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.353493 4661 scope.go:117] "RemoveContainer" containerID="3aedb91957221786e1dcc868a09e3573f3c5afa43abd2444844f68df756f724c" Jan 20 19:30:20 crc kubenswrapper[4661]: E0120 19:30:20.353994 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aedb91957221786e1dcc868a09e3573f3c5afa43abd2444844f68df756f724c\": container with ID starting with 3aedb91957221786e1dcc868a09e3573f3c5afa43abd2444844f68df756f724c not found: ID does not exist" containerID="3aedb91957221786e1dcc868a09e3573f3c5afa43abd2444844f68df756f724c" Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.354041 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aedb91957221786e1dcc868a09e3573f3c5afa43abd2444844f68df756f724c"} err="failed to get container status \"3aedb91957221786e1dcc868a09e3573f3c5afa43abd2444844f68df756f724c\": rpc error: code = NotFound desc = could not find container \"3aedb91957221786e1dcc868a09e3573f3c5afa43abd2444844f68df756f724c\": container with ID starting with 3aedb91957221786e1dcc868a09e3573f3c5afa43abd2444844f68df756f724c not found: ID does not exist" Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.354067 4661 scope.go:117] "RemoveContainer" containerID="5ba0d380a38cc94df00cdd8a30ee71fdc7abbaa52f9f6659613b956b6e14430b" Jan 20 19:30:20 crc kubenswrapper[4661]: E0120 19:30:20.354350 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ba0d380a38cc94df00cdd8a30ee71fdc7abbaa52f9f6659613b956b6e14430b\": container with ID starting with 5ba0d380a38cc94df00cdd8a30ee71fdc7abbaa52f9f6659613b956b6e14430b not found: ID does not exist" containerID="5ba0d380a38cc94df00cdd8a30ee71fdc7abbaa52f9f6659613b956b6e14430b" Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.354383 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba0d380a38cc94df00cdd8a30ee71fdc7abbaa52f9f6659613b956b6e14430b"} err="failed to get container status \"5ba0d380a38cc94df00cdd8a30ee71fdc7abbaa52f9f6659613b956b6e14430b\": rpc error: code = NotFound desc = could not find container \"5ba0d380a38cc94df00cdd8a30ee71fdc7abbaa52f9f6659613b956b6e14430b\": container with ID starting with 5ba0d380a38cc94df00cdd8a30ee71fdc7abbaa52f9f6659613b956b6e14430b not found: ID does not exist" Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.354402 4661 scope.go:117] "RemoveContainer" containerID="1c71b292b1f7715a756ad67b8b3a6ad1a4b5875bca13dba9110d00b7dac55fa4" Jan 20 19:30:20 crc kubenswrapper[4661]: E0120 19:30:20.354737 4661 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1c71b292b1f7715a756ad67b8b3a6ad1a4b5875bca13dba9110d00b7dac55fa4\": container with ID starting with 1c71b292b1f7715a756ad67b8b3a6ad1a4b5875bca13dba9110d00b7dac55fa4 not found: ID does not exist" containerID="1c71b292b1f7715a756ad67b8b3a6ad1a4b5875bca13dba9110d00b7dac55fa4" Jan 20 19:30:20 crc kubenswrapper[4661]: I0120 19:30:20.354767 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c71b292b1f7715a756ad67b8b3a6ad1a4b5875bca13dba9110d00b7dac55fa4"} err="failed to get container status \"1c71b292b1f7715a756ad67b8b3a6ad1a4b5875bca13dba9110d00b7dac55fa4\": rpc error: code = NotFound desc = could not find container \"1c71b292b1f7715a756ad67b8b3a6ad1a4b5875bca13dba9110d00b7dac55fa4\": container with ID starting with 1c71b292b1f7715a756ad67b8b3a6ad1a4b5875bca13dba9110d00b7dac55fa4 not found: ID does not exist" Jan 20 19:30:22 crc kubenswrapper[4661]: I0120 19:30:22.142998 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:30:22 crc kubenswrapper[4661]: E0120 19:30:22.143528 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:30:22 crc kubenswrapper[4661]: I0120 19:30:22.155444 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb24ae3-0e22-47f8-b197-e236b88a72b3" path="/var/lib/kubelet/pods/1fb24ae3-0e22-47f8-b197-e236b88a72b3/volumes" Jan 20 19:30:34 crc kubenswrapper[4661]: I0120 19:30:34.147256 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:30:34 crc kubenswrapper[4661]: E0120 19:30:34.148305 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:30:45 crc kubenswrapper[4661]: I0120 19:30:45.142949 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:30:45 crc kubenswrapper[4661]: E0120 19:30:45.144077 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:30:59 crc kubenswrapper[4661]: I0120 19:30:59.141775 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:30:59 crc kubenswrapper[4661]: E0120 19:30:59.142613 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:31:13 crc kubenswrapper[4661]: I0120 19:31:13.142592 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:31:13 crc kubenswrapper[4661]: E0120 19:31:13.143816 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.099827 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2z2xq/must-gather-p8q68"] Jan 20 19:31:22 crc kubenswrapper[4661]: E0120 19:31:22.100818 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb24ae3-0e22-47f8-b197-e236b88a72b3" containerName="extract-content" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.100831 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb24ae3-0e22-47f8-b197-e236b88a72b3" containerName="extract-content" Jan 20 19:31:22 crc kubenswrapper[4661]: E0120 19:31:22.100839 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb24ae3-0e22-47f8-b197-e236b88a72b3" containerName="extract-utilities" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.100845 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb24ae3-0e22-47f8-b197-e236b88a72b3" containerName="extract-utilities" Jan 20 19:31:22 crc kubenswrapper[4661]: E0120 19:31:22.100860 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb24ae3-0e22-47f8-b197-e236b88a72b3" containerName="registry-server" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.100867 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb24ae3-0e22-47f8-b197-e236b88a72b3" containerName="registry-server" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.101189 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb24ae3-0e22-47f8-b197-e236b88a72b3" containerName="registry-server" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.102365 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2z2xq/must-gather-p8q68" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.103921 4661 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2z2xq"/"default-dockercfg-5wr2n" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.105472 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2z2xq"/"openshift-service-ca.crt" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.105717 4661 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2z2xq"/"kube-root-ca.crt" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.168073 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1c0be25f-c64b-46cd-96d1-c9306ca20536-must-gather-output\") pod \"must-gather-p8q68\" (UID: \"1c0be25f-c64b-46cd-96d1-c9306ca20536\") " pod="openshift-must-gather-2z2xq/must-gather-p8q68" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.168266 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5n47\" (UniqueName: \"kubernetes.io/projected/1c0be25f-c64b-46cd-96d1-c9306ca20536-kube-api-access-k5n47\") pod \"must-gather-p8q68\" (UID: \"1c0be25f-c64b-46cd-96d1-c9306ca20536\") " pod="openshift-must-gather-2z2xq/must-gather-p8q68" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.191006 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2z2xq/must-gather-p8q68"] Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.269946 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1c0be25f-c64b-46cd-96d1-c9306ca20536-must-gather-output\") pod \"must-gather-p8q68\" (UID: \"1c0be25f-c64b-46cd-96d1-c9306ca20536\") " pod="openshift-must-gather-2z2xq/must-gather-p8q68" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.270008 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5n47\" (UniqueName: \"kubernetes.io/projected/1c0be25f-c64b-46cd-96d1-c9306ca20536-kube-api-access-k5n47\") pod \"must-gather-p8q68\" (UID: \"1c0be25f-c64b-46cd-96d1-c9306ca20536\") " pod="openshift-must-gather-2z2xq/must-gather-p8q68" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.270658 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1c0be25f-c64b-46cd-96d1-c9306ca20536-must-gather-output\") pod \"must-gather-p8q68\" (UID: \"1c0be25f-c64b-46cd-96d1-c9306ca20536\") " pod="openshift-must-gather-2z2xq/must-gather-p8q68" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.305403 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5n47\" (UniqueName: \"kubernetes.io/projected/1c0be25f-c64b-46cd-96d1-c9306ca20536-kube-api-access-k5n47\") pod \"must-gather-p8q68\" (UID: \"1c0be25f-c64b-46cd-96d1-c9306ca20536\") " pod="openshift-must-gather-2z2xq/must-gather-p8q68" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.419528 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2z2xq/must-gather-p8q68" Jan 20 19:31:22 crc kubenswrapper[4661]: I0120 19:31:22.871121 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2z2xq/must-gather-p8q68"] Jan 20 19:31:23 crc kubenswrapper[4661]: I0120 19:31:23.893255 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2z2xq/must-gather-p8q68" event={"ID":"1c0be25f-c64b-46cd-96d1-c9306ca20536","Type":"ContainerStarted","Data":"7107f61bca4c2db6a1e4e6be2c93cf01e69f35ef0990cbd311ad2ea112861067"} Jan 20 19:31:23 crc kubenswrapper[4661]: I0120 19:31:23.893816 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2z2xq/must-gather-p8q68" event={"ID":"1c0be25f-c64b-46cd-96d1-c9306ca20536","Type":"ContainerStarted","Data":"88f3cd081cc0b391c97df2c593b87901b89d12659b809e90a40f989bc24ecb5a"} Jan 20 19:31:23 crc kubenswrapper[4661]: I0120 19:31:23.893868 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2z2xq/must-gather-p8q68" event={"ID":"1c0be25f-c64b-46cd-96d1-c9306ca20536","Type":"ContainerStarted","Data":"ec30a54faa487e51d2bdc4ef0b9948e060104cb9b0a62c3ec4e8db8c42b2bbcb"} Jan 20 19:31:23 crc kubenswrapper[4661]: I0120 19:31:23.913371 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2z2xq/must-gather-p8q68" podStartSLOduration=1.9133530909999998 podStartE2EDuration="1.913353091s" podCreationTimestamp="2026-01-20 19:31:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 19:31:23.905938218 +0000 UTC m=+5140.236727880" watchObservedRunningTime="2026-01-20 19:31:23.913353091 +0000 UTC m=+5140.244142753" Jan 20 19:31:24 crc kubenswrapper[4661]: I0120 19:31:24.148215 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:31:24 crc kubenswrapper[4661]: E0120 19:31:24.148479 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:31:28 crc kubenswrapper[4661]: I0120 19:31:28.440421 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2z2xq/crc-debug-5bqx5"] Jan 20 19:31:28 crc kubenswrapper[4661]: I0120 19:31:28.467344 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" Jan 20 19:31:28 crc kubenswrapper[4661]: I0120 19:31:28.583557 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q6zx\" (UniqueName: \"kubernetes.io/projected/b11282dc-9a83-4361-b616-acc9e4b8ea97-kube-api-access-4q6zx\") pod \"crc-debug-5bqx5\" (UID: \"b11282dc-9a83-4361-b616-acc9e4b8ea97\") " pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" Jan 20 19:31:28 crc kubenswrapper[4661]: I0120 19:31:28.583629 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b11282dc-9a83-4361-b616-acc9e4b8ea97-host\") pod \"crc-debug-5bqx5\" (UID: \"b11282dc-9a83-4361-b616-acc9e4b8ea97\") " pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" Jan 20 19:31:28 crc kubenswrapper[4661]: I0120 19:31:28.685337 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q6zx\" (UniqueName: \"kubernetes.io/projected/b11282dc-9a83-4361-b616-acc9e4b8ea97-kube-api-access-4q6zx\") pod \"crc-debug-5bqx5\" (UID: \"b11282dc-9a83-4361-b616-acc9e4b8ea97\") " pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" Jan 20 19:31:28 crc kubenswrapper[4661]: I0120 19:31:28.685436 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b11282dc-9a83-4361-b616-acc9e4b8ea97-host\") pod \"crc-debug-5bqx5\" (UID: \"b11282dc-9a83-4361-b616-acc9e4b8ea97\") " pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" Jan 20 19:31:28 crc kubenswrapper[4661]: I0120 19:31:28.685708 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b11282dc-9a83-4361-b616-acc9e4b8ea97-host\") pod \"crc-debug-5bqx5\" (UID: \"b11282dc-9a83-4361-b616-acc9e4b8ea97\") " pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" Jan 20 19:31:28 crc kubenswrapper[4661]: I0120 19:31:28.709389 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q6zx\" (UniqueName: \"kubernetes.io/projected/b11282dc-9a83-4361-b616-acc9e4b8ea97-kube-api-access-4q6zx\") pod \"crc-debug-5bqx5\" (UID: \"b11282dc-9a83-4361-b616-acc9e4b8ea97\") " pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" Jan 20 19:31:28 crc kubenswrapper[4661]: I0120 19:31:28.801409 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" Jan 20 19:31:28 crc kubenswrapper[4661]: I0120 19:31:28.938098 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" event={"ID":"b11282dc-9a83-4361-b616-acc9e4b8ea97","Type":"ContainerStarted","Data":"4335176430d1e785c2ceae87f243f505fa72245df894de6fefa2aa16b70ee26b"} Jan 20 19:31:29 crc kubenswrapper[4661]: I0120 19:31:29.947557 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" event={"ID":"b11282dc-9a83-4361-b616-acc9e4b8ea97","Type":"ContainerStarted","Data":"5a14a940839b66caaeb155dbddf78741718d471f59263a7dcf16ea8fc445cbcb"} Jan 20 19:31:29 crc kubenswrapper[4661]: I0120 19:31:29.959722 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" podStartSLOduration=1.959702853 podStartE2EDuration="1.959702853s" podCreationTimestamp="2026-01-20 19:31:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 19:31:29.957869625 +0000 UTC m=+5146.288659287" watchObservedRunningTime="2026-01-20 19:31:29.959702853 +0000 UTC m=+5146.290492515" Jan 20 19:31:31 crc kubenswrapper[4661]: I0120 19:31:31.756451 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b8c8444c8-77p78_e636b383-8c8d-4554-9717-35ba37b726f5/barbican-api-log/0.log" Jan 20 19:31:31 crc kubenswrapper[4661]: I0120 19:31:31.767633 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b8c8444c8-77p78_e636b383-8c8d-4554-9717-35ba37b726f5/barbican-api/0.log" Jan 20 19:31:31 crc kubenswrapper[4661]: I0120 19:31:31.809196 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-596b75897b-2g4gm_f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58/barbican-keystone-listener-log/0.log" Jan 20 19:31:31 crc kubenswrapper[4661]: I0120 19:31:31.816311 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-596b75897b-2g4gm_f94c4b9e-2f53-44d0-a637-4e8f4a3f9d58/barbican-keystone-listener/0.log" Jan 20 19:31:31 crc kubenswrapper[4661]: I0120 19:31:31.834876 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b484f76ff-qrd8w_341d9328-73af-4986-9901-43b929a9e030/barbican-worker-log/0.log" Jan 20 19:31:31 crc kubenswrapper[4661]: I0120 19:31:31.841053 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b484f76ff-qrd8w_341d9328-73af-4986-9901-43b929a9e030/barbican-worker/0.log" Jan 20 19:31:31 crc kubenswrapper[4661]: I0120 19:31:31.899783 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-r28s7_e65dc54b-d336-441e-b167-cb297ef179a5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:31:31 crc kubenswrapper[4661]: I0120 19:31:31.917111 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_359e2c27-69df-47ab-95bb-e9f70c04f988/ceilometer-central-agent/0.log" Jan 20 19:31:31 crc kubenswrapper[4661]: I0120 19:31:31.982170 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_359e2c27-69df-47ab-95bb-e9f70c04f988/ceilometer-notification-agent/0.log" Jan 20 19:31:31 crc kubenswrapper[4661]: I0120 19:31:31.990413 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_359e2c27-69df-47ab-95bb-e9f70c04f988/sg-core/0.log" Jan 20 19:31:31 crc kubenswrapper[4661]: I0120 19:31:31.999662 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_359e2c27-69df-47ab-95bb-e9f70c04f988/proxy-httpd/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.009684 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-28cfq_3689afcd-a340-4415-a127-c9ce66ab8d7b/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.022015 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-7nsxk_98f2afa8-7c09-4427-a558-a3da2bfd4df4/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.049369 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b4fa215a-165d-44b7-9bfd-19a2a9a5205c/cinder-api-log/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.128926 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b4fa215a-165d-44b7-9bfd-19a2a9a5205c/cinder-api/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.368176 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3066acf4-e48e-410e-8623-f29b5424f4fe/cinder-backup/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.393030 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3066acf4-e48e-410e-8623-f29b5424f4fe/probe/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.450995 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_da178eaf-bf04-4638-a071-808d119fd4ec/cinder-scheduler/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.495720 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_da178eaf-bf04-4638-a071-808d119fd4ec/probe/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.574513 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_c8c04a60-5bb8-4d54-93a6-1acfcbea3358/cinder-volume/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.600259 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_c8c04a60-5bb8-4d54-93a6-1acfcbea3358/probe/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.628384 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-n2tt9_77d1abe1-5293-4f5d-b062-d3fc2bb71510/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.658096 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-nzqc8_e38a8deb-8469-4d4d-a865-4d374e8fcb7c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.866609 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-77766fdf55-f5frm_a3cb0dc2-5231-45c2-81ae-038a006f73f0/dnsmasq-dns/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.871462 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-77766fdf55-f5frm_a3cb0dc2-5231-45c2-81ae-038a006f73f0/init/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.885209 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ff298cae-f405-48cf-a8b3-c297f1e6cf80/glance-log/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.904367 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ff298cae-f405-48cf-a8b3-c297f1e6cf80/glance-httpd/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.916243 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_26659b2b-07b6-4184-b2a8-bad999a10fd3/glance-log/0.log" Jan 20 19:31:32 crc kubenswrapper[4661]: I0120 19:31:32.937332 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_26659b2b-07b6-4184-b2a8-bad999a10fd3/glance-httpd/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.182947 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-658f6cd46d-59d52_3c2196ee-0a5d-49b8-9f9b-4eada2792101/horizon-log/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.277883 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-658f6cd46d-59d52_3c2196ee-0a5d-49b8-9f9b-4eada2792101/horizon/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.315900 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5r5z7_47dfef92-6673-4a9f-9999-47f830dd42bc/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.349882 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-p6cj5_4fae988d-a1e4-4f89-8a5f-45989cd3584c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.525492 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-c468c8b55-f2kw4_40923d71-e4f3-4c19-939c-e9f9b12fe635/keystone-api/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.532412 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29482261-r7s7h_089b2d3d-e382-4407-828e-cbeb9199951f/keystone-cron/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.542962 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a42dbd72-de9b-49d9-b7fb-b8255659f933/kube-state-metrics/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.596099 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-h99fk_e8b8d2fe-c25d-41e1-b32e-6e81c03e0717/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.609043 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_c4f33125-e16c-4df4-9c3f-9f772fe671eb/manila-api-log/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.787022 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_c4f33125-e16c-4df4-9c3f-9f772fe671eb/manila-api/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.868986 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_c4217d44-feda-4241-9ede-1e22b3324b01/manila-scheduler/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.876600 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_c4217d44-feda-4241-9ede-1e22b3324b01/probe/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.937389 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_6c223935-3aa9-491c-8f9d-638441f57742/manila-share/0.log" Jan 20 19:31:33 crc kubenswrapper[4661]: I0120 19:31:33.943531 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_6c223935-3aa9-491c-8f9d-638441f57742/probe/0.log" Jan 20 19:31:36 crc kubenswrapper[4661]: I0120 19:31:36.143136 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:31:36 crc kubenswrapper[4661]: E0120 19:31:36.143963 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:31:47 crc kubenswrapper[4661]: I0120 19:31:47.144380 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:31:47 crc kubenswrapper[4661]: E0120 19:31:47.145800 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:31:53 crc kubenswrapper[4661]: I0120 19:31:53.144919 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-htvf4_e4437f72-2da5-4c3a-8a69-4a26f3190a62/controller/0.log" Jan 20 19:31:53 crc kubenswrapper[4661]: I0120 19:31:53.151526 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-htvf4_e4437f72-2da5-4c3a-8a69-4a26f3190a62/kube-rbac-proxy/0.log" Jan 20 19:31:53 crc kubenswrapper[4661]: I0120 19:31:53.211853 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/controller/0.log" Jan 20 19:31:55 crc kubenswrapper[4661]: I0120 19:31:55.758785 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/frr/0.log" Jan 20 19:31:55 crc kubenswrapper[4661]: I0120 19:31:55.774480 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/reloader/0.log" Jan 20 19:31:55 crc kubenswrapper[4661]: I0120 19:31:55.778920 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/frr-metrics/0.log" Jan 20 19:31:55 crc kubenswrapper[4661]: I0120 19:31:55.786846 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/kube-rbac-proxy/0.log" Jan 20 19:31:55 crc kubenswrapper[4661]: I0120 19:31:55.793921 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/kube-rbac-proxy-frr/0.log" Jan 20 19:31:55 crc kubenswrapper[4661]: I0120 19:31:55.800965 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-frr-files/0.log" Jan 20 19:31:55 crc kubenswrapper[4661]: I0120 19:31:55.807572 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-reloader/0.log" Jan 20 19:31:55 crc kubenswrapper[4661]: I0120 19:31:55.813770 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-metrics/0.log" Jan 20 19:31:55 crc kubenswrapper[4661]: I0120 19:31:55.839530 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7z86n_c47c14be-ea84-47ba-a52b-9cb718ae6a30/frr-k8s-webhook-server/0.log" Jan 20 19:31:55 crc kubenswrapper[4661]: I0120 19:31:55.881234 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6f4477bbcd-h46rv_715feebe-b380-4ce1-9842-7f9da051a195/manager/0.log" Jan 20 19:31:55 crc kubenswrapper[4661]: I0120 19:31:55.908296 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-686c759fbc-hdkt8_e0bbd467-090b-431e-b89b-8159d61d7dab/webhook-server/0.log" Jan 20 19:31:56 crc kubenswrapper[4661]: I0120 19:31:56.367653 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-58jbs_69809466-8e46-4f8a-b90e-638f8af8b313/speaker/0.log" Jan 20 19:31:56 crc kubenswrapper[4661]: I0120 19:31:56.374696 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-58jbs_69809466-8e46-4f8a-b90e-638f8af8b313/kube-rbac-proxy/0.log" Jan 20 19:31:59 crc kubenswrapper[4661]: I0120 19:31:59.949196 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ee2394e6-ec1c-4093-9c8d-6a5795f4d146/memcached/0.log" Jan 20 19:32:00 crc kubenswrapper[4661]: I0120 19:32:00.044787 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-57dd7457c5-2txjn_978fc50f-3ea8-4427-af11-d8f4c4f3c0d5/neutron-api/0.log" Jan 20 19:32:00 crc kubenswrapper[4661]: I0120 19:32:00.095887 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-57dd7457c5-2txjn_978fc50f-3ea8-4427-af11-d8f4c4f3c0d5/neutron-httpd/0.log" Jan 20 19:32:00 crc kubenswrapper[4661]: I0120 19:32:00.121488 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-tbhjj_58c705e0-9353-44b0-b3af-65c84ddb1f44/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:32:00 crc kubenswrapper[4661]: I0120 19:32:00.329141 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c36f847e-e718-445f-927e-4c6145c5ac8d/nova-api-log/0.log" Jan 20 19:32:00 crc kubenswrapper[4661]: I0120 19:32:00.897102 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c36f847e-e718-445f-927e-4c6145c5ac8d/nova-api-api/0.log" Jan 20 19:32:01 crc kubenswrapper[4661]: I0120 19:32:01.025021 4661 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b5e5805d-a947-403a-b1dc-77949080c7be/nova-cell0-conductor-conductor/0.log" Jan 20 19:32:01 crc kubenswrapper[4661]: I0120 19:32:01.136928 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ebdbbfb8-e8c3-405b-914d-0ace13b50e32/nova-cell1-conductor-conductor/0.log" Jan 20 19:32:01 crc kubenswrapper[4661]: I0120 19:32:01.238035 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_22e1bf04-4a38-4fa3-85c3-b63e90226ffa/nova-cell1-novncproxy-novncproxy/0.log" Jan 20 19:32:01 crc kubenswrapper[4661]: I0120 19:32:01.296488 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-cnqdb_40502583-1982-469d-a228-04488a4eb068/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:32:01 crc kubenswrapper[4661]: I0120 19:32:01.354199 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_253d6878-90af-44c0-b6d2-dfb0d79a2190/nova-metadata-log/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.144496 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:32:02 crc kubenswrapper[4661]: E0120 19:32:02.145714 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.642769 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_253d6878-90af-44c0-b6d2-dfb0d79a2190/nova-metadata-metadata/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.783936 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_47ac760b-6ca8-4048-a60e-e3717fcb25ec/nova-scheduler-scheduler/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.806406 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1a3386fb-6ffa-47fa-8697-8d3c45ff61be/galera/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.819781 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1a3386fb-6ffa-47fa-8697-8d3c45ff61be/mysql-bootstrap/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.849755 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1c705fc7-9ad0-4254-ad57-63db21057251/galera/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.865678 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1c705fc7-9ad0-4254-ad57-63db21057251/mysql-bootstrap/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.876448 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c6b78b7c-8709-4a28-bc8f-1cf8960203cc/openstackclient/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.886744 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-8mm55_97b7dc90-ebf5-4783-9d06-b0e30eb4d2d8/openstack-network-exporter/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.901402 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9k84x_6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d/ovsdb-server/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.915625 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9k84x_6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d/ovs-vswitchd/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.920975 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9k84x_6d1b9f50-80c4-494b-8ea6-f3cd3ca1b98d/ovsdb-server-init/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.934216 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-p7h4x_65017fb7-6ab3-43d0-a308-a3d8da39b811/ovn-controller/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.979260 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-kgwml_7f25c2cb-31da-4f0d-b7cb-472e09443f4a/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.992452 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_26291859-ffb9-435a-92bd-7ebc53f7e4bc/ovn-northd/0.log" Jan 20 19:32:02 crc kubenswrapper[4661]: I0120 19:32:02.997805 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_26291859-ffb9-435a-92bd-7ebc53f7e4bc/openstack-network-exporter/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.021920 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8a95987a-80ef-495d-adf7-f60c952836ce/ovsdbserver-nb/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.025869 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8a95987a-80ef-495d-adf7-f60c952836ce/openstack-network-exporter/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.042098 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_20636e35-51a8-4c79-888a-64d59e109a53/ovsdbserver-sb/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.048732 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_20636e35-51a8-4c79-888a-64d59e109a53/openstack-network-exporter/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.162062 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-66f64dd556-cvpcx_371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61/placement-log/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.228516 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-66f64dd556-cvpcx_371e36aa-f7e5-443b-9f8e-ef9b8e9d0f61/placement-api/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.254229 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7301e169-326c-4397-89f7-28b94553cef4/rabbitmq/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.259953 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7301e169-326c-4397-89f7-28b94553cef4/setup-container/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.289481 4661 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-server-0_5a690866-3b40-4a9f-ba41-a5a3a6d76c95/rabbitmq/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.293988 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5a690866-3b40-4a9f-ba41-a5a3a6d76c95/setup-container/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.312873 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-zl299_9898267e-7857-4ef5-8f1a-10d5f1a97cec/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.324384 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-h5wl9_39ff301d-9d0a-441f-879b-64ddb885ad9b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.339354 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-kzmcr_fd038034-bcb2-4723-a94f-16af58612f58/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.361217 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gr4vd_e10ea9a4-1fd9-43d6-bf38-0365ddbdb5d0/ssh-known-hosts-edpm-deployment/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.380988 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_fcc30bf2-7b68-4438-b7db-b041e1d1e2ff/tempest-tests-tempest-tests-runner/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.387054 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_db09fccc-50d1-4990-b482-a782822de50d/test-operator-logs-container/0.log" Jan 20 19:32:03 crc kubenswrapper[4661]: I0120 19:32:03.398754 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-wxksd_4190b947-a737-4a67-bfa9-dad8bb4a7499/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 19:32:05 crc kubenswrapper[4661]: I0120 19:32:05.282862 4661 generic.go:334] "Generic (PLEG): container finished" podID="b11282dc-9a83-4361-b616-acc9e4b8ea97" containerID="5a14a940839b66caaeb155dbddf78741718d471f59263a7dcf16ea8fc445cbcb" exitCode=0 Jan 20 19:32:05 crc kubenswrapper[4661]: I0120 19:32:05.282957 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" event={"ID":"b11282dc-9a83-4361-b616-acc9e4b8ea97","Type":"ContainerDied","Data":"5a14a940839b66caaeb155dbddf78741718d471f59263a7dcf16ea8fc445cbcb"} Jan 20 19:32:06 crc kubenswrapper[4661]: I0120 19:32:06.393844 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" Jan 20 19:32:06 crc kubenswrapper[4661]: I0120 19:32:06.426471 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2z2xq/crc-debug-5bqx5"] Jan 20 19:32:06 crc kubenswrapper[4661]: I0120 19:32:06.435263 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2z2xq/crc-debug-5bqx5"] Jan 20 19:32:06 crc kubenswrapper[4661]: I0120 19:32:06.479104 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b11282dc-9a83-4361-b616-acc9e4b8ea97-host\") pod \"b11282dc-9a83-4361-b616-acc9e4b8ea97\" (UID: \"b11282dc-9a83-4361-b616-acc9e4b8ea97\") " Jan 20 19:32:06 crc kubenswrapper[4661]: I0120 19:32:06.479210 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b11282dc-9a83-4361-b616-acc9e4b8ea97-host" (OuterVolumeSpecName: "host") pod "b11282dc-9a83-4361-b616-acc9e4b8ea97" (UID: "b11282dc-9a83-4361-b616-acc9e4b8ea97"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 19:32:06 crc kubenswrapper[4661]: I0120 19:32:06.479446 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q6zx\" (UniqueName: \"kubernetes.io/projected/b11282dc-9a83-4361-b616-acc9e4b8ea97-kube-api-access-4q6zx\") pod \"b11282dc-9a83-4361-b616-acc9e4b8ea97\" (UID: \"b11282dc-9a83-4361-b616-acc9e4b8ea97\") " Jan 20 19:32:06 crc kubenswrapper[4661]: I0120 19:32:06.479823 4661 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b11282dc-9a83-4361-b616-acc9e4b8ea97-host\") on node \"crc\" DevicePath \"\"" Jan 20 19:32:06 crc kubenswrapper[4661]: I0120 19:32:06.484899 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11282dc-9a83-4361-b616-acc9e4b8ea97-kube-api-access-4q6zx" (OuterVolumeSpecName: "kube-api-access-4q6zx") pod "b11282dc-9a83-4361-b616-acc9e4b8ea97" (UID: "b11282dc-9a83-4361-b616-acc9e4b8ea97"). InnerVolumeSpecName "kube-api-access-4q6zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:32:06 crc kubenswrapper[4661]: I0120 19:32:06.581895 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q6zx\" (UniqueName: \"kubernetes.io/projected/b11282dc-9a83-4361-b616-acc9e4b8ea97-kube-api-access-4q6zx\") on node \"crc\" DevicePath \"\"" Jan 20 19:32:07 crc kubenswrapper[4661]: I0120 19:32:07.299650 4661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4335176430d1e785c2ceae87f243f505fa72245df894de6fefa2aa16b70ee26b" Jan 20 19:32:07 crc kubenswrapper[4661]: I0120 19:32:07.299741 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2z2xq/crc-debug-5bqx5" Jan 20 19:32:07 crc kubenswrapper[4661]: I0120 19:32:07.633339 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2z2xq/crc-debug-j6ndp"] Jan 20 19:32:07 crc kubenswrapper[4661]: E0120 19:32:07.634364 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11282dc-9a83-4361-b616-acc9e4b8ea97" containerName="container-00" Jan 20 19:32:07 crc kubenswrapper[4661]: I0120 19:32:07.634463 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11282dc-9a83-4361-b616-acc9e4b8ea97" containerName="container-00" Jan 20 19:32:07 crc kubenswrapper[4661]: I0120 19:32:07.634813 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11282dc-9a83-4361-b616-acc9e4b8ea97" containerName="container-00" Jan 20 19:32:07 crc kubenswrapper[4661]: I0120 19:32:07.635623 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2z2xq/crc-debug-j6ndp" Jan 20 19:32:07 crc kubenswrapper[4661]: I0120 19:32:07.702102 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt9gs\" (UniqueName: \"kubernetes.io/projected/fa3eaab6-95c0-4378-a402-4abe2bf5268f-kube-api-access-lt9gs\") pod \"crc-debug-j6ndp\" (UID: \"fa3eaab6-95c0-4378-a402-4abe2bf5268f\") " pod="openshift-must-gather-2z2xq/crc-debug-j6ndp" Jan 20 19:32:07 crc kubenswrapper[4661]: I0120 19:32:07.702442 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa3eaab6-95c0-4378-a402-4abe2bf5268f-host\") pod \"crc-debug-j6ndp\" (UID: \"fa3eaab6-95c0-4378-a402-4abe2bf5268f\") " pod="openshift-must-gather-2z2xq/crc-debug-j6ndp" Jan 20 19:32:07 crc kubenswrapper[4661]: I0120 19:32:07.804939 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt9gs\" (UniqueName: \"kubernetes.io/projected/fa3eaab6-95c0-4378-a402-4abe2bf5268f-kube-api-access-lt9gs\") pod \"crc-debug-j6ndp\" (UID: \"fa3eaab6-95c0-4378-a402-4abe2bf5268f\") " pod="openshift-must-gather-2z2xq/crc-debug-j6ndp" Jan 20 19:32:07 crc kubenswrapper[4661]: I0120 19:32:07.805384 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa3eaab6-95c0-4378-a402-4abe2bf5268f-host\") pod \"crc-debug-j6ndp\" (UID: \"fa3eaab6-95c0-4378-a402-4abe2bf5268f\") " pod="openshift-must-gather-2z2xq/crc-debug-j6ndp" Jan 20 19:32:07 crc kubenswrapper[4661]: I0120 19:32:07.805525 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa3eaab6-95c0-4378-a402-4abe2bf5268f-host\") pod \"crc-debug-j6ndp\" (UID: \"fa3eaab6-95c0-4378-a402-4abe2bf5268f\") " pod="openshift-must-gather-2z2xq/crc-debug-j6ndp" Jan 20 19:32:07 crc kubenswrapper[4661]: I0120 19:32:07.828652 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt9gs\" (UniqueName: \"kubernetes.io/projected/fa3eaab6-95c0-4378-a402-4abe2bf5268f-kube-api-access-lt9gs\") pod \"crc-debug-j6ndp\" (UID: \"fa3eaab6-95c0-4378-a402-4abe2bf5268f\") " pod="openshift-must-gather-2z2xq/crc-debug-j6ndp" Jan 20 19:32:07 crc kubenswrapper[4661]: I0120 19:32:07.954872 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2z2xq/crc-debug-j6ndp" Jan 20 19:32:08 crc kubenswrapper[4661]: I0120 19:32:08.163450 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11282dc-9a83-4361-b616-acc9e4b8ea97" path="/var/lib/kubelet/pods/b11282dc-9a83-4361-b616-acc9e4b8ea97/volumes" Jan 20 19:32:08 crc kubenswrapper[4661]: I0120 19:32:08.309535 4661 generic.go:334] "Generic (PLEG): container finished" podID="fa3eaab6-95c0-4378-a402-4abe2bf5268f" containerID="f7deb5966180721a704a89104c1f169e147266c28d082bb4177291ff16e93778" exitCode=0 Jan 20 19:32:08 crc kubenswrapper[4661]: I0120 19:32:08.309604 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2z2xq/crc-debug-j6ndp" event={"ID":"fa3eaab6-95c0-4378-a402-4abe2bf5268f","Type":"ContainerDied","Data":"f7deb5966180721a704a89104c1f169e147266c28d082bb4177291ff16e93778"} Jan 20 19:32:08 crc kubenswrapper[4661]: I0120 19:32:08.309636 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2z2xq/crc-debug-j6ndp" event={"ID":"fa3eaab6-95c0-4378-a402-4abe2bf5268f","Type":"ContainerStarted","Data":"e2464d48e2cf38f7865ab4e429d8f4a939994f974ef48c0ca9a88f6866225c20"} Jan 20 19:32:08 crc kubenswrapper[4661]: I0120 19:32:08.740508 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2z2xq/crc-debug-j6ndp"] Jan 20 19:32:08 crc kubenswrapper[4661]: I0120 19:32:08.748719 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2z2xq/crc-debug-j6ndp"] Jan 20 19:32:09 crc kubenswrapper[4661]: I0120 19:32:09.438646 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2z2xq/crc-debug-j6ndp" Jan 20 19:32:09 crc kubenswrapper[4661]: I0120 19:32:09.537874 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt9gs\" (UniqueName: \"kubernetes.io/projected/fa3eaab6-95c0-4378-a402-4abe2bf5268f-kube-api-access-lt9gs\") pod \"fa3eaab6-95c0-4378-a402-4abe2bf5268f\" (UID: \"fa3eaab6-95c0-4378-a402-4abe2bf5268f\") " Jan 20 19:32:09 crc kubenswrapper[4661]: I0120 19:32:09.537963 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa3eaab6-95c0-4378-a402-4abe2bf5268f-host\") pod \"fa3eaab6-95c0-4378-a402-4abe2bf5268f\" (UID: \"fa3eaab6-95c0-4378-a402-4abe2bf5268f\") " Jan 20 19:32:09 crc kubenswrapper[4661]: I0120 19:32:09.538135 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa3eaab6-95c0-4378-a402-4abe2bf5268f-host" (OuterVolumeSpecName: "host") pod "fa3eaab6-95c0-4378-a402-4abe2bf5268f" (UID: "fa3eaab6-95c0-4378-a402-4abe2bf5268f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 19:32:09 crc kubenswrapper[4661]: I0120 19:32:09.538697 4661 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa3eaab6-95c0-4378-a402-4abe2bf5268f-host\") on node \"crc\" DevicePath \"\"" Jan 20 19:32:09 crc kubenswrapper[4661]: I0120 19:32:09.542764 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa3eaab6-95c0-4378-a402-4abe2bf5268f-kube-api-access-lt9gs" (OuterVolumeSpecName: "kube-api-access-lt9gs") pod "fa3eaab6-95c0-4378-a402-4abe2bf5268f" (UID: "fa3eaab6-95c0-4378-a402-4abe2bf5268f"). InnerVolumeSpecName "kube-api-access-lt9gs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:32:09 crc kubenswrapper[4661]: I0120 19:32:09.640901 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt9gs\" (UniqueName: \"kubernetes.io/projected/fa3eaab6-95c0-4378-a402-4abe2bf5268f-kube-api-access-lt9gs\") on node \"crc\" DevicePath \"\"" Jan 20 19:32:09 crc kubenswrapper[4661]: I0120 19:32:09.981973 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2z2xq/crc-debug-lzt5w"] Jan 20 19:32:09 crc kubenswrapper[4661]: E0120 19:32:09.982462 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa3eaab6-95c0-4378-a402-4abe2bf5268f" containerName="container-00" Jan 20 19:32:09 crc kubenswrapper[4661]: I0120 19:32:09.982476 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa3eaab6-95c0-4378-a402-4abe2bf5268f" containerName="container-00" Jan 20 19:32:09 crc kubenswrapper[4661]: I0120 19:32:09.982642 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa3eaab6-95c0-4378-a402-4abe2bf5268f" containerName="container-00" Jan 20 19:32:09 crc kubenswrapper[4661]: I0120 19:32:09.983279 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2z2xq/crc-debug-lzt5w" Jan 20 19:32:10 crc kubenswrapper[4661]: I0120 19:32:10.047747 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f12e246-e6ad-42a7-94bf-53a57ddde7d9-host\") pod \"crc-debug-lzt5w\" (UID: \"0f12e246-e6ad-42a7-94bf-53a57ddde7d9\") " pod="openshift-must-gather-2z2xq/crc-debug-lzt5w" Jan 20 19:32:10 crc kubenswrapper[4661]: I0120 19:32:10.048031 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmhdt\" (UniqueName: \"kubernetes.io/projected/0f12e246-e6ad-42a7-94bf-53a57ddde7d9-kube-api-access-tmhdt\") pod \"crc-debug-lzt5w\" (UID: \"0f12e246-e6ad-42a7-94bf-53a57ddde7d9\") " pod="openshift-must-gather-2z2xq/crc-debug-lzt5w" Jan 20 19:32:10 crc kubenswrapper[4661]: I0120 19:32:10.149895 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmhdt\" (UniqueName: \"kubernetes.io/projected/0f12e246-e6ad-42a7-94bf-53a57ddde7d9-kube-api-access-tmhdt\") pod \"crc-debug-lzt5w\" (UID: \"0f12e246-e6ad-42a7-94bf-53a57ddde7d9\") " pod="openshift-must-gather-2z2xq/crc-debug-lzt5w" Jan 20 19:32:10 crc kubenswrapper[4661]: I0120 19:32:10.150044 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f12e246-e6ad-42a7-94bf-53a57ddde7d9-host\") pod \"crc-debug-lzt5w\" (UID: \"0f12e246-e6ad-42a7-94bf-53a57ddde7d9\") " pod="openshift-must-gather-2z2xq/crc-debug-lzt5w" Jan 20 19:32:10 crc kubenswrapper[4661]: I0120 19:32:10.150141 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f12e246-e6ad-42a7-94bf-53a57ddde7d9-host\") pod \"crc-debug-lzt5w\" (UID: \"0f12e246-e6ad-42a7-94bf-53a57ddde7d9\") " pod="openshift-must-gather-2z2xq/crc-debug-lzt5w" Jan 20 19:32:10 crc kubenswrapper[4661]: I0120 19:32:10.168248 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa3eaab6-95c0-4378-a402-4abe2bf5268f" path="/var/lib/kubelet/pods/fa3eaab6-95c0-4378-a402-4abe2bf5268f/volumes" Jan 20 19:32:10 crc kubenswrapper[4661]: I0120 19:32:10.171462 4661 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tmhdt\" (UniqueName: \"kubernetes.io/projected/0f12e246-e6ad-42a7-94bf-53a57ddde7d9-kube-api-access-tmhdt\") pod \"crc-debug-lzt5w\" (UID: \"0f12e246-e6ad-42a7-94bf-53a57ddde7d9\") " pod="openshift-must-gather-2z2xq/crc-debug-lzt5w" Jan 20 19:32:10 crc kubenswrapper[4661]: I0120 19:32:10.304363 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2z2xq/crc-debug-lzt5w" Jan 20 19:32:10 crc kubenswrapper[4661]: W0120 19:32:10.331251 4661 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f12e246_e6ad_42a7_94bf_53a57ddde7d9.slice/crio-7243a2cc3912304bc0234170ae518186c8407e76425203073a44b6f78208d92e WatchSource:0}: Error finding container 7243a2cc3912304bc0234170ae518186c8407e76425203073a44b6f78208d92e: Status 404 returned error can't find the container with id 7243a2cc3912304bc0234170ae518186c8407e76425203073a44b6f78208d92e Jan 20 19:32:10 crc kubenswrapper[4661]: I0120 19:32:10.333205 4661 scope.go:117] "RemoveContainer" containerID="f7deb5966180721a704a89104c1f169e147266c28d082bb4177291ff16e93778" Jan 20 19:32:10 crc kubenswrapper[4661]: I0120 19:32:10.333309 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2z2xq/crc-debug-j6ndp" Jan 20 19:32:11 crc kubenswrapper[4661]: I0120 19:32:11.349319 4661 generic.go:334] "Generic (PLEG): container finished" podID="0f12e246-e6ad-42a7-94bf-53a57ddde7d9" containerID="e60bfdd7b50c00c62a8ae7689c912c0d5d571102aec9cdf04afcf2156e3879ca" exitCode=0 Jan 20 19:32:11 crc kubenswrapper[4661]: I0120 19:32:11.349463 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2z2xq/crc-debug-lzt5w" event={"ID":"0f12e246-e6ad-42a7-94bf-53a57ddde7d9","Type":"ContainerDied","Data":"e60bfdd7b50c00c62a8ae7689c912c0d5d571102aec9cdf04afcf2156e3879ca"} Jan 20 19:32:11 crc kubenswrapper[4661]: I0120 19:32:11.349937 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2z2xq/crc-debug-lzt5w" event={"ID":"0f12e246-e6ad-42a7-94bf-53a57ddde7d9","Type":"ContainerStarted","Data":"7243a2cc3912304bc0234170ae518186c8407e76425203073a44b6f78208d92e"} Jan 20 19:32:11 crc kubenswrapper[4661]: I0120 19:32:11.398196 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2z2xq/crc-debug-lzt5w"] Jan 20 19:32:11 crc kubenswrapper[4661]: I0120 19:32:11.405614 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2z2xq/crc-debug-lzt5w"] Jan 20 19:32:12 crc kubenswrapper[4661]: I0120 19:32:12.951360 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2z2xq/crc-debug-lzt5w" Jan 20 19:32:13 crc kubenswrapper[4661]: I0120 19:32:13.121432 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmhdt\" (UniqueName: \"kubernetes.io/projected/0f12e246-e6ad-42a7-94bf-53a57ddde7d9-kube-api-access-tmhdt\") pod \"0f12e246-e6ad-42a7-94bf-53a57ddde7d9\" (UID: \"0f12e246-e6ad-42a7-94bf-53a57ddde7d9\") " Jan 20 19:32:13 crc kubenswrapper[4661]: I0120 19:32:13.121490 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f12e246-e6ad-42a7-94bf-53a57ddde7d9-host\") pod \"0f12e246-e6ad-42a7-94bf-53a57ddde7d9\" (UID: \"0f12e246-e6ad-42a7-94bf-53a57ddde7d9\") " Jan 20 19:32:13 crc kubenswrapper[4661]: I0120 19:32:13.121661 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f12e246-e6ad-42a7-94bf-53a57ddde7d9-host" (OuterVolumeSpecName: "host") pod "0f12e246-e6ad-42a7-94bf-53a57ddde7d9" (UID: "0f12e246-e6ad-42a7-94bf-53a57ddde7d9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 19:32:13 crc kubenswrapper[4661]: I0120 19:32:13.122317 4661 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f12e246-e6ad-42a7-94bf-53a57ddde7d9-host\") on node \"crc\" DevicePath \"\"" Jan 20 19:32:13 crc kubenswrapper[4661]: I0120 19:32:13.131766 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f12e246-e6ad-42a7-94bf-53a57ddde7d9-kube-api-access-tmhdt" (OuterVolumeSpecName: "kube-api-access-tmhdt") pod "0f12e246-e6ad-42a7-94bf-53a57ddde7d9" (UID: "0f12e246-e6ad-42a7-94bf-53a57ddde7d9"). InnerVolumeSpecName "kube-api-access-tmhdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:32:13 crc kubenswrapper[4661]: I0120 19:32:13.224514 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmhdt\" (UniqueName: \"kubernetes.io/projected/0f12e246-e6ad-42a7-94bf-53a57ddde7d9-kube-api-access-tmhdt\") on node \"crc\" DevicePath \"\"" Jan 20 19:32:13 crc kubenswrapper[4661]: I0120 19:32:13.368002 4661 scope.go:117] "RemoveContainer" containerID="e60bfdd7b50c00c62a8ae7689c912c0d5d571102aec9cdf04afcf2156e3879ca" Jan 20 19:32:13 crc kubenswrapper[4661]: I0120 19:32:13.368015 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2z2xq/crc-debug-lzt5w" Jan 20 19:32:14 crc kubenswrapper[4661]: I0120 19:32:14.146594 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:32:14 crc kubenswrapper[4661]: E0120 19:32:14.147199 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:32:14 crc kubenswrapper[4661]: I0120 19:32:14.151657 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f12e246-e6ad-42a7-94bf-53a57ddde7d9" path="/var/lib/kubelet/pods/0f12e246-e6ad-42a7-94bf-53a57ddde7d9/volumes" Jan 20 19:32:26 crc kubenswrapper[4661]: I0120 19:32:26.143082 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:32:26 crc kubenswrapper[4661]: E0120 19:32:26.143960 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.016073 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-bbwzg_e257e7b3-ba70-44d2-abb9-6a6848bf1c06/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.025204 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/extract/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.029802 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/util/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.036777 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/pull/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.099143 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-hk9zx_51bdae14-22a5-4783-8712-fc51ca6d8a07/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.108067 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dw6hd_08e08814-f213-4476-a78d-82cddc30022d/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.197838 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gzjg9_eccd3436-cb57-49b8-a2f7-106fe5e39c7d/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.206041 4661 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-r5bws_2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.220715 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5w4m2_04a8f9c5-45fc-47db-adf2-3de38af2cf96/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.463246 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-w8bbb_70002b35-6f0d-4679-9279-a80574c467f0/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.477290 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-cszrc_10ed69a9-7fbf-4139-b2b2-80dec4f8cf41/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.539485 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-4g9db_a5920876-3cd0-41cf-b7d8-6fd8ea0af29c/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.582945 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-svt25_6c1159da-faf7-4389-b57b-05173827968d/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.615983 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-69ktn_12b130a9-df33-4c1a-a145-961791dc9d9d/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.681809 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-cqf8m_1b070a22-e050-4db7-bc74-f8a1129a8d61/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.750690 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-2g55t_52bfaf4d-624e-45d7-86d8-4c0e18afe2e6/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.759596 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-mqz45_5798b368-6725-4e14-a77c-37b7bcfd538d/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.777748 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc_65995719-9618-424e-a324-084d52a0cd47/manager/0.log" Jan 20 19:32:29 crc kubenswrapper[4661]: I0120 19:32:29.895296 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-fdc84db4c-p87rq_8e170a45-9133-4aee-81e7-7f6188f48c91/operator/0.log" Jan 20 19:32:31 crc kubenswrapper[4661]: I0120 19:32:31.030249 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58b4997fc9-9wjks_d603e76e-8a9d-444f-b251-2d29b5588c8e/manager/0.log" Jan 20 19:32:31 crc kubenswrapper[4661]: I0120 19:32:31.041590 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wj8kr_a9b5891c-9b50-4f14-ade6-69a048487d08/registry-server/0.log" Jan 20 19:32:31 crc kubenswrapper[4661]: I0120 19:32:31.105351 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-4msz7_f61aad5b-f531-4dc0-8328-4b057c84651e/manager/0.log" Jan 20 19:32:31 crc kubenswrapper[4661]: I0120 19:32:31.125887 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-tcgdv_dbbf0040-fc50-457e-ad76-42d6061a6df1/manager/0.log" Jan 20 19:32:31 crc kubenswrapper[4661]: I0120 19:32:31.146565 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-v8gf9_2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec/operator/0.log" Jan 20 19:32:31 crc kubenswrapper[4661]: I0120 19:32:31.156700 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-wqr58_497cc518-3499-43be-8aff-c4ff58803cba/manager/0.log" Jan 20 19:32:31 crc kubenswrapper[4661]: I0120 19:32:31.214919 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-2wsx8_22fe1eac-c7f9-4cef-8811-db5861b4caa2/manager/0.log" Jan 20 19:32:31 crc kubenswrapper[4661]: I0120 19:32:31.229064 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-gg985_7f267072-d784-469d-acad-238e58ddd82c/manager/0.log" Jan 20 19:32:31 crc kubenswrapper[4661]: I0120 19:32:31.240159 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-hppzk_5a07b584-21cc-464b-a3bf-046c6e0ab18f/manager/0.log" Jan 20 19:32:37 crc kubenswrapper[4661]: I0120 19:32:37.333264 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qdlnn_a507ebcc-7e0b-445b-9688-882358d365ce/control-plane-machine-set-operator/0.log" Jan 20 19:32:37 crc kubenswrapper[4661]: I0120 19:32:37.348860 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hbkg_302e8226-565c-44a4-bb0e-dee670200ae3/kube-rbac-proxy/0.log" Jan 20 19:32:37 crc kubenswrapper[4661]: I0120 19:32:37.361201 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hbkg_302e8226-565c-44a4-bb0e-dee670200ae3/machine-api-operator/0.log" Jan 20 19:32:41 crc kubenswrapper[4661]: I0120 19:32:41.143431 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:32:41 crc kubenswrapper[4661]: E0120 19:32:41.144459 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:32:45 crc kubenswrapper[4661]: I0120 19:32:45.701150 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lx59p"] Jan 20 19:32:45 crc kubenswrapper[4661]: E0120 19:32:45.702802 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f12e246-e6ad-42a7-94bf-53a57ddde7d9" containerName="container-00" Jan 20 19:32:45 crc kubenswrapper[4661]: I0120 19:32:45.702837 4661 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0f12e246-e6ad-42a7-94bf-53a57ddde7d9" containerName="container-00" Jan 20 19:32:45 crc kubenswrapper[4661]: I0120 19:32:45.703288 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f12e246-e6ad-42a7-94bf-53a57ddde7d9" containerName="container-00" Jan 20 19:32:45 crc kubenswrapper[4661]: I0120 19:32:45.706760 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:45 crc kubenswrapper[4661]: I0120 19:32:45.713044 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lx59p"] Jan 20 19:32:45 crc kubenswrapper[4661]: I0120 19:32:45.892532 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93f00d7b-4ab2-4411-8880-07d0a28a884f-utilities\") pod \"redhat-marketplace-lx59p\" (UID: \"93f00d7b-4ab2-4411-8880-07d0a28a884f\") " pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:45 crc kubenswrapper[4661]: I0120 19:32:45.892646 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93f00d7b-4ab2-4411-8880-07d0a28a884f-catalog-content\") pod \"redhat-marketplace-lx59p\" (UID: \"93f00d7b-4ab2-4411-8880-07d0a28a884f\") " pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:45 crc kubenswrapper[4661]: I0120 19:32:45.892689 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwgqr\" (UniqueName: \"kubernetes.io/projected/93f00d7b-4ab2-4411-8880-07d0a28a884f-kube-api-access-gwgqr\") pod \"redhat-marketplace-lx59p\" (UID: \"93f00d7b-4ab2-4411-8880-07d0a28a884f\") " pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:45 crc kubenswrapper[4661]: I0120 19:32:45.994226 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93f00d7b-4ab2-4411-8880-07d0a28a884f-catalog-content\") pod \"redhat-marketplace-lx59p\" (UID: \"93f00d7b-4ab2-4411-8880-07d0a28a884f\") " pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:45 crc kubenswrapper[4661]: I0120 19:32:45.994271 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwgqr\" (UniqueName: \"kubernetes.io/projected/93f00d7b-4ab2-4411-8880-07d0a28a884f-kube-api-access-gwgqr\") pod \"redhat-marketplace-lx59p\" (UID: \"93f00d7b-4ab2-4411-8880-07d0a28a884f\") " pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:45 crc kubenswrapper[4661]: I0120 19:32:45.994409 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93f00d7b-4ab2-4411-8880-07d0a28a884f-utilities\") pod \"redhat-marketplace-lx59p\" (UID: \"93f00d7b-4ab2-4411-8880-07d0a28a884f\") " pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:45 crc kubenswrapper[4661]: I0120 19:32:45.994735 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93f00d7b-4ab2-4411-8880-07d0a28a884f-catalog-content\") pod \"redhat-marketplace-lx59p\" (UID: \"93f00d7b-4ab2-4411-8880-07d0a28a884f\") " pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:45 crc kubenswrapper[4661]: I0120 
19:32:45.994751 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93f00d7b-4ab2-4411-8880-07d0a28a884f-utilities\") pod \"redhat-marketplace-lx59p\" (UID: \"93f00d7b-4ab2-4411-8880-07d0a28a884f\") " pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:46 crc kubenswrapper[4661]: I0120 19:32:46.016586 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwgqr\" (UniqueName: \"kubernetes.io/projected/93f00d7b-4ab2-4411-8880-07d0a28a884f-kube-api-access-gwgqr\") pod \"redhat-marketplace-lx59p\" (UID: \"93f00d7b-4ab2-4411-8880-07d0a28a884f\") " pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:46 crc kubenswrapper[4661]: I0120 19:32:46.046916 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:46 crc kubenswrapper[4661]: I0120 19:32:46.564583 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lx59p"] Jan 20 19:32:46 crc kubenswrapper[4661]: I0120 19:32:46.674182 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lx59p" event={"ID":"93f00d7b-4ab2-4411-8880-07d0a28a884f","Type":"ContainerStarted","Data":"6c63330208ab5d24425f354942148e06fd678cb1e241af2e3fd0499190f38d78"} Jan 20 19:32:47 crc kubenswrapper[4661]: I0120 19:32:47.686527 4661 generic.go:334] "Generic (PLEG): container finished" podID="93f00d7b-4ab2-4411-8880-07d0a28a884f" containerID="e729021c0b078aebccbf4983b959192244b3ac7338c1a1d68e446a0fe1c9f88c" exitCode=0 Jan 20 19:32:47 crc kubenswrapper[4661]: I0120 19:32:47.687268 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lx59p" event={"ID":"93f00d7b-4ab2-4411-8880-07d0a28a884f","Type":"ContainerDied","Data":"e729021c0b078aebccbf4983b959192244b3ac7338c1a1d68e446a0fe1c9f88c"} Jan 20 19:32:47 crc kubenswrapper[4661]: I0120 19:32:47.693017 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 19:32:49 crc kubenswrapper[4661]: I0120 19:32:49.716636 4661 generic.go:334] "Generic (PLEG): container finished" podID="93f00d7b-4ab2-4411-8880-07d0a28a884f" containerID="e6a5b6be7806a29834e762402ca23329f2a55d21c60ec6ae8907997474f29e61" exitCode=0 Jan 20 19:32:49 crc kubenswrapper[4661]: I0120 19:32:49.717203 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lx59p" event={"ID":"93f00d7b-4ab2-4411-8880-07d0a28a884f","Type":"ContainerDied","Data":"e6a5b6be7806a29834e762402ca23329f2a55d21c60ec6ae8907997474f29e61"} Jan 20 19:32:50 crc kubenswrapper[4661]: I0120 19:32:50.739444 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lx59p" event={"ID":"93f00d7b-4ab2-4411-8880-07d0a28a884f","Type":"ContainerStarted","Data":"7c041e6d89b16a56472695fe30e6397dbf317381eee349348139ca0245d30649"} Jan 20 19:32:50 crc kubenswrapper[4661]: I0120 19:32:50.771711 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lx59p" podStartSLOduration=3.103084496 podStartE2EDuration="5.771690627s" podCreationTimestamp="2026-01-20 19:32:45 +0000 UTC" firstStartedPulling="2026-01-20 19:32:47.69268198 +0000 UTC m=+5224.023471642" lastFinishedPulling="2026-01-20 19:32:50.361288071 +0000 UTC m=+5226.692077773" observedRunningTime="2026-01-20 
19:32:50.761943334 +0000 UTC m=+5227.092733026" watchObservedRunningTime="2026-01-20 19:32:50.771690627 +0000 UTC m=+5227.102480289" Jan 20 19:32:55 crc kubenswrapper[4661]: I0120 19:32:55.142108 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:32:55 crc kubenswrapper[4661]: E0120 19:32:55.144870 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:32:56 crc kubenswrapper[4661]: I0120 19:32:56.047210 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:56 crc kubenswrapper[4661]: I0120 19:32:56.047682 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:56 crc kubenswrapper[4661]: I0120 19:32:56.211384 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:56 crc kubenswrapper[4661]: I0120 19:32:56.848796 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:56 crc kubenswrapper[4661]: I0120 19:32:56.906890 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lx59p"] Jan 20 19:32:58 crc kubenswrapper[4661]: I0120 19:32:58.823103 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lx59p" podUID="93f00d7b-4ab2-4411-8880-07d0a28a884f" containerName="registry-server" containerID="cri-o://7c041e6d89b16a56472695fe30e6397dbf317381eee349348139ca0245d30649" gracePeriod=2 Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.333154 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.488918 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93f00d7b-4ab2-4411-8880-07d0a28a884f-utilities\") pod \"93f00d7b-4ab2-4411-8880-07d0a28a884f\" (UID: \"93f00d7b-4ab2-4411-8880-07d0a28a884f\") " Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.489091 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93f00d7b-4ab2-4411-8880-07d0a28a884f-catalog-content\") pod \"93f00d7b-4ab2-4411-8880-07d0a28a884f\" (UID: \"93f00d7b-4ab2-4411-8880-07d0a28a884f\") " Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.489199 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwgqr\" (UniqueName: \"kubernetes.io/projected/93f00d7b-4ab2-4411-8880-07d0a28a884f-kube-api-access-gwgqr\") pod \"93f00d7b-4ab2-4411-8880-07d0a28a884f\" (UID: \"93f00d7b-4ab2-4411-8880-07d0a28a884f\") " Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.490620 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f00d7b-4ab2-4411-8880-07d0a28a884f-utilities" (OuterVolumeSpecName: "utilities") pod "93f00d7b-4ab2-4411-8880-07d0a28a884f" (UID: "93f00d7b-4ab2-4411-8880-07d0a28a884f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.497102 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f00d7b-4ab2-4411-8880-07d0a28a884f-kube-api-access-gwgqr" (OuterVolumeSpecName: "kube-api-access-gwgqr") pod "93f00d7b-4ab2-4411-8880-07d0a28a884f" (UID: "93f00d7b-4ab2-4411-8880-07d0a28a884f"). InnerVolumeSpecName "kube-api-access-gwgqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.512179 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f00d7b-4ab2-4411-8880-07d0a28a884f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93f00d7b-4ab2-4411-8880-07d0a28a884f" (UID: "93f00d7b-4ab2-4411-8880-07d0a28a884f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.591548 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93f00d7b-4ab2-4411-8880-07d0a28a884f-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.591581 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93f00d7b-4ab2-4411-8880-07d0a28a884f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.591593 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwgqr\" (UniqueName: \"kubernetes.io/projected/93f00d7b-4ab2-4411-8880-07d0a28a884f-kube-api-access-gwgqr\") on node \"crc\" DevicePath \"\"" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.842245 4661 generic.go:334] "Generic (PLEG): container finished" podID="93f00d7b-4ab2-4411-8880-07d0a28a884f" containerID="7c041e6d89b16a56472695fe30e6397dbf317381eee349348139ca0245d30649" exitCode=0 Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.842299 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lx59p" event={"ID":"93f00d7b-4ab2-4411-8880-07d0a28a884f","Type":"ContainerDied","Data":"7c041e6d89b16a56472695fe30e6397dbf317381eee349348139ca0245d30649"} Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.842330 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lx59p" event={"ID":"93f00d7b-4ab2-4411-8880-07d0a28a884f","Type":"ContainerDied","Data":"6c63330208ab5d24425f354942148e06fd678cb1e241af2e3fd0499190f38d78"} Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.842350 4661 scope.go:117] "RemoveContainer" containerID="7c041e6d89b16a56472695fe30e6397dbf317381eee349348139ca0245d30649" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.842381 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lx59p" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.862781 4661 scope.go:117] "RemoveContainer" containerID="e6a5b6be7806a29834e762402ca23329f2a55d21c60ec6ae8907997474f29e61" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.883699 4661 scope.go:117] "RemoveContainer" containerID="e729021c0b078aebccbf4983b959192244b3ac7338c1a1d68e446a0fe1c9f88c" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.913960 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lx59p"] Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.925987 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lx59p"] Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.930235 4661 scope.go:117] "RemoveContainer" containerID="7c041e6d89b16a56472695fe30e6397dbf317381eee349348139ca0245d30649" Jan 20 19:32:59 crc kubenswrapper[4661]: E0120 19:32:59.930912 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c041e6d89b16a56472695fe30e6397dbf317381eee349348139ca0245d30649\": container with ID starting with 7c041e6d89b16a56472695fe30e6397dbf317381eee349348139ca0245d30649 not found: ID does not exist" containerID="7c041e6d89b16a56472695fe30e6397dbf317381eee349348139ca0245d30649" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.931029 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c041e6d89b16a56472695fe30e6397dbf317381eee349348139ca0245d30649"} err="failed to get container status \"7c041e6d89b16a56472695fe30e6397dbf317381eee349348139ca0245d30649\": rpc error: code = NotFound desc = could not find container \"7c041e6d89b16a56472695fe30e6397dbf317381eee349348139ca0245d30649\": container with ID starting with 7c041e6d89b16a56472695fe30e6397dbf317381eee349348139ca0245d30649 not found: ID does not exist" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.931122 4661 scope.go:117] "RemoveContainer" containerID="e6a5b6be7806a29834e762402ca23329f2a55d21c60ec6ae8907997474f29e61" Jan 20 19:32:59 crc kubenswrapper[4661]: E0120 19:32:59.931473 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6a5b6be7806a29834e762402ca23329f2a55d21c60ec6ae8907997474f29e61\": container with ID starting with e6a5b6be7806a29834e762402ca23329f2a55d21c60ec6ae8907997474f29e61 not found: ID does not exist" containerID="e6a5b6be7806a29834e762402ca23329f2a55d21c60ec6ae8907997474f29e61" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.931597 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6a5b6be7806a29834e762402ca23329f2a55d21c60ec6ae8907997474f29e61"} err="failed to get container status \"e6a5b6be7806a29834e762402ca23329f2a55d21c60ec6ae8907997474f29e61\": rpc error: code = NotFound desc = could not find container \"e6a5b6be7806a29834e762402ca23329f2a55d21c60ec6ae8907997474f29e61\": container with ID starting with e6a5b6be7806a29834e762402ca23329f2a55d21c60ec6ae8907997474f29e61 not found: ID does not exist" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.931724 4661 scope.go:117] "RemoveContainer" containerID="e729021c0b078aebccbf4983b959192244b3ac7338c1a1d68e446a0fe1c9f88c" Jan 20 19:32:59 crc kubenswrapper[4661]: E0120 19:32:59.932021 4661 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e729021c0b078aebccbf4983b959192244b3ac7338c1a1d68e446a0fe1c9f88c\": container with ID starting with e729021c0b078aebccbf4983b959192244b3ac7338c1a1d68e446a0fe1c9f88c not found: ID does not exist" containerID="e729021c0b078aebccbf4983b959192244b3ac7338c1a1d68e446a0fe1c9f88c" Jan 20 19:32:59 crc kubenswrapper[4661]: I0120 19:32:59.932130 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e729021c0b078aebccbf4983b959192244b3ac7338c1a1d68e446a0fe1c9f88c"} err="failed to get container status \"e729021c0b078aebccbf4983b959192244b3ac7338c1a1d68e446a0fe1c9f88c\": rpc error: code = NotFound desc = could not find container \"e729021c0b078aebccbf4983b959192244b3ac7338c1a1d68e446a0fe1c9f88c\": container with ID starting with e729021c0b078aebccbf4983b959192244b3ac7338c1a1d68e446a0fe1c9f88c not found: ID does not exist" Jan 20 19:33:00 crc kubenswrapper[4661]: I0120 19:33:00.160577 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f00d7b-4ab2-4411-8880-07d0a28a884f" path="/var/lib/kubelet/pods/93f00d7b-4ab2-4411-8880-07d0a28a884f/volumes" Jan 20 19:33:09 crc kubenswrapper[4661]: I0120 19:33:09.004822 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2wqcl_13a9f3bc-c133-49ea-9cfd-bc8c107e32c6/cert-manager-controller/0.log" Jan 20 19:33:09 crc kubenswrapper[4661]: I0120 19:33:09.019999 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-f7qg8_3c6e82bb-badf-4079-abf0-566f4b6f0776/cert-manager-cainjector/0.log" Jan 20 19:33:09 crc kubenswrapper[4661]: I0120 19:33:09.028620 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-scrrz_b1feddfe-5c29-4eba-99c5-65849498f0dc/cert-manager-webhook/0.log" Jan 20 19:33:10 crc kubenswrapper[4661]: I0120 19:33:10.145022 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:33:10 crc kubenswrapper[4661]: E0120 19:33:10.145489 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:33:15 crc kubenswrapper[4661]: I0120 19:33:15.270295 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-frgmz_68fe7ab0-cff5-474c-aa0d-7c579ddc51bb/nmstate-console-plugin/0.log" Jan 20 19:33:15 crc kubenswrapper[4661]: I0120 19:33:15.288634 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q9p2x_0b121ec2-f30a-46c4-a556-dd00cca2a1e3/nmstate-handler/0.log" Jan 20 19:33:15 crc kubenswrapper[4661]: I0120 19:33:15.307884 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7pxb7_7f2c01ac-294a-42b8-9988-22419d94a0ec/nmstate-metrics/0.log" Jan 20 19:33:15 crc kubenswrapper[4661]: I0120 19:33:15.317855 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7pxb7_7f2c01ac-294a-42b8-9988-22419d94a0ec/kube-rbac-proxy/0.log" Jan 20 
19:33:15 crc kubenswrapper[4661]: I0120 19:33:15.336332 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-mkng6_91e3ce75-26ba-42cb-b4dd-322bc9188bab/nmstate-operator/0.log" Jan 20 19:33:15 crc kubenswrapper[4661]: I0120 19:33:15.347298 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-6rl82_fa442cfc-fd6e-4b5d-882d-aaa8de83f99a/nmstate-webhook/0.log" Jan 20 19:33:21 crc kubenswrapper[4661]: I0120 19:33:21.142702 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:33:21 crc kubenswrapper[4661]: E0120 19:33:21.143248 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:33:27 crc kubenswrapper[4661]: I0120 19:33:27.134601 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-htvf4_e4437f72-2da5-4c3a-8a69-4a26f3190a62/controller/0.log" Jan 20 19:33:27 crc kubenswrapper[4661]: I0120 19:33:27.143509 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-htvf4_e4437f72-2da5-4c3a-8a69-4a26f3190a62/kube-rbac-proxy/0.log" Jan 20 19:33:27 crc kubenswrapper[4661]: I0120 19:33:27.171567 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/controller/0.log" Jan 20 19:33:28 crc kubenswrapper[4661]: I0120 19:33:28.883866 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/frr/0.log" Jan 20 19:33:28 crc kubenswrapper[4661]: I0120 19:33:28.894449 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/reloader/0.log" Jan 20 19:33:28 crc kubenswrapper[4661]: I0120 19:33:28.900052 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/frr-metrics/0.log" Jan 20 19:33:28 crc kubenswrapper[4661]: I0120 19:33:28.908987 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/kube-rbac-proxy/0.log" Jan 20 19:33:28 crc kubenswrapper[4661]: I0120 19:33:28.916323 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/kube-rbac-proxy-frr/0.log" Jan 20 19:33:28 crc kubenswrapper[4661]: I0120 19:33:28.923229 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-frr-files/0.log" Jan 20 19:33:28 crc kubenswrapper[4661]: I0120 19:33:28.930026 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-reloader/0.log" Jan 20 19:33:28 crc kubenswrapper[4661]: I0120 19:33:28.936479 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-metrics/0.log" Jan 20 19:33:28 crc kubenswrapper[4661]: I0120 
19:33:28.947431 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7z86n_c47c14be-ea84-47ba-a52b-9cb718ae6a30/frr-k8s-webhook-server/0.log" Jan 20 19:33:28 crc kubenswrapper[4661]: I0120 19:33:28.978584 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6f4477bbcd-h46rv_715feebe-b380-4ce1-9842-7f9da051a195/manager/0.log" Jan 20 19:33:28 crc kubenswrapper[4661]: I0120 19:33:28.987688 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-686c759fbc-hdkt8_e0bbd467-090b-431e-b89b-8159d61d7dab/webhook-server/0.log" Jan 20 19:33:29 crc kubenswrapper[4661]: I0120 19:33:29.310070 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-58jbs_69809466-8e46-4f8a-b90e-638f8af8b313/speaker/0.log" Jan 20 19:33:29 crc kubenswrapper[4661]: I0120 19:33:29.320625 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-58jbs_69809466-8e46-4f8a-b90e-638f8af8b313/kube-rbac-proxy/0.log" Jan 20 19:33:33 crc kubenswrapper[4661]: I0120 19:33:33.273442 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn_32740d6c-8df0-4b8a-8097-5fbecd7ca5e5/extract/0.log" Jan 20 19:33:33 crc kubenswrapper[4661]: I0120 19:33:33.283461 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn_32740d6c-8df0-4b8a-8097-5fbecd7ca5e5/util/0.log" Jan 20 19:33:33 crc kubenswrapper[4661]: I0120 19:33:33.291322 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwzssn_32740d6c-8df0-4b8a-8097-5fbecd7ca5e5/pull/0.log" Jan 20 19:33:33 crc kubenswrapper[4661]: I0120 19:33:33.300793 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn_b42326cd-209d-43fd-8195-113ca565dfee/extract/0.log" Jan 20 19:33:33 crc kubenswrapper[4661]: I0120 19:33:33.314412 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn_b42326cd-209d-43fd-8195-113ca565dfee/util/0.log" Jan 20 19:33:33 crc kubenswrapper[4661]: I0120 19:33:33.326317 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138pmwn_b42326cd-209d-43fd-8195-113ca565dfee/pull/0.log" Jan 20 19:33:34 crc kubenswrapper[4661]: I0120 19:33:34.042068 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4qf7z_c4a07359-5af4-415a-af87-0b579fb7d0dc/registry-server/0.log" Jan 20 19:33:34 crc kubenswrapper[4661]: I0120 19:33:34.048662 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4qf7z_c4a07359-5af4-415a-af87-0b579fb7d0dc/extract-utilities/0.log" Jan 20 19:33:34 crc kubenswrapper[4661]: I0120 19:33:34.062918 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4qf7z_c4a07359-5af4-415a-af87-0b579fb7d0dc/extract-content/0.log" Jan 20 19:33:34 crc kubenswrapper[4661]: I0120 19:33:34.872477 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-snrjs_6c7424e7-2b2f-4f1b-8970-9061b4f651ff/registry-server/0.log" Jan 20 19:33:34 crc kubenswrapper[4661]: I0120 19:33:34.877792 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snrjs_6c7424e7-2b2f-4f1b-8970-9061b4f651ff/extract-utilities/0.log" Jan 20 19:33:34 crc kubenswrapper[4661]: I0120 19:33:34.884780 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snrjs_6c7424e7-2b2f-4f1b-8970-9061b4f651ff/extract-content/0.log" Jan 20 19:33:34 crc kubenswrapper[4661]: I0120 19:33:34.902306 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-p2zck_663dd2a4-8e69-41d7-b561-4419dd0b4e90/marketplace-operator/0.log" Jan 20 19:33:35 crc kubenswrapper[4661]: I0120 19:33:35.081851 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-chn68_631fc07f-b0f0-4f54-881f-bc76a8ec7b34/registry-server/0.log" Jan 20 19:33:35 crc kubenswrapper[4661]: I0120 19:33:35.086421 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-chn68_631fc07f-b0f0-4f54-881f-bc76a8ec7b34/extract-utilities/0.log" Jan 20 19:33:35 crc kubenswrapper[4661]: I0120 19:33:35.096224 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-chn68_631fc07f-b0f0-4f54-881f-bc76a8ec7b34/extract-content/0.log" Jan 20 19:33:35 crc kubenswrapper[4661]: I0120 19:33:35.729184 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cm29g_a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef/registry-server/0.log" Jan 20 19:33:35 crc kubenswrapper[4661]: I0120 19:33:35.736495 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cm29g_a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef/extract-utilities/0.log" Jan 20 19:33:35 crc kubenswrapper[4661]: I0120 19:33:35.748471 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cm29g_a4aacd2d-5f80-4352-bd51-48f7a3a9b8ef/extract-content/0.log" Jan 20 19:33:36 crc kubenswrapper[4661]: I0120 19:33:36.146593 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:33:36 crc kubenswrapper[4661]: E0120 19:33:36.147162 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:33:47 crc kubenswrapper[4661]: I0120 19:33:47.142970 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:33:47 crc kubenswrapper[4661]: E0120 19:33:47.143681 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" 
podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:33:58 crc kubenswrapper[4661]: I0120 19:33:58.142753 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:33:58 crc kubenswrapper[4661]: E0120 19:33:58.144562 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:34:12 crc kubenswrapper[4661]: I0120 19:34:12.143853 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:34:12 crc kubenswrapper[4661]: E0120 19:34:12.144661 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:34:25 crc kubenswrapper[4661]: I0120 19:34:25.143098 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:34:25 crc kubenswrapper[4661]: E0120 19:34:25.143968 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:34:39 crc kubenswrapper[4661]: I0120 19:34:39.142256 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:34:39 crc kubenswrapper[4661]: I0120 19:34:39.737338 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"51457ad649bdc1ea8dd82d8ab16bfc72ad5c24435b0fd5e6de792c40c78dc5c4"} Jan 20 19:35:01 crc kubenswrapper[4661]: I0120 19:35:01.091653 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-htvf4_e4437f72-2da5-4c3a-8a69-4a26f3190a62/controller/0.log" Jan 20 19:35:01 crc kubenswrapper[4661]: I0120 19:35:01.097641 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-htvf4_e4437f72-2da5-4c3a-8a69-4a26f3190a62/kube-rbac-proxy/0.log" Jan 20 19:35:01 crc kubenswrapper[4661]: I0120 19:35:01.129791 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/controller/0.log" Jan 20 19:35:01 crc kubenswrapper[4661]: I0120 19:35:01.265622 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2wqcl_13a9f3bc-c133-49ea-9cfd-bc8c107e32c6/cert-manager-controller/0.log" Jan 20 19:35:01 crc kubenswrapper[4661]: I0120 19:35:01.279301 4661 
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-f7qg8_3c6e82bb-badf-4079-abf0-566f4b6f0776/cert-manager-cainjector/0.log" Jan 20 19:35:01 crc kubenswrapper[4661]: I0120 19:35:01.295410 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-scrrz_b1feddfe-5c29-4eba-99c5-65849498f0dc/cert-manager-webhook/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.539840 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-bbwzg_e257e7b3-ba70-44d2-abb9-6a6848bf1c06/manager/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.549073 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/extract/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.556987 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/util/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.568133 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/pull/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.623096 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/frr/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.635020 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/reloader/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.639903 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/frr-metrics/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.640108 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-hk9zx_51bdae14-22a5-4783-8712-fc51ca6d8a07/manager/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.647699 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/kube-rbac-proxy/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.651141 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dw6hd_08e08814-f213-4476-a78d-82cddc30022d/manager/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.655012 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/kube-rbac-proxy-frr/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.662714 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-frr-files/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.676605 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-reloader/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.684129 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9s69m_52982b0c-438c-4fbd-be5f-03fe6aca0327/cp-metrics/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.701145 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7z86n_c47c14be-ea84-47ba-a52b-9cb718ae6a30/frr-k8s-webhook-server/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.720167 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gzjg9_eccd3436-cb57-49b8-a2f7-106fe5e39c7d/manager/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.728167 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6f4477bbcd-h46rv_715feebe-b380-4ce1-9842-7f9da051a195/manager/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.732501 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-r5bws_2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec/manager/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.736926 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-686c759fbc-hdkt8_e0bbd467-090b-431e-b89b-8159d61d7dab/webhook-server/0.log" Jan 20 19:35:02 crc kubenswrapper[4661]: I0120 19:35:02.745705 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5w4m2_04a8f9c5-45fc-47db-adf2-3de38af2cf96/manager/0.log" Jan 20 19:35:03 crc kubenswrapper[4661]: I0120 19:35:03.117808 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-w8bbb_70002b35-6f0d-4679-9279-a80574c467f0/manager/0.log" Jan 20 19:35:03 crc kubenswrapper[4661]: I0120 19:35:03.145927 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-cszrc_10ed69a9-7fbf-4139-b2b2-80dec4f8cf41/manager/0.log" Jan 20 19:35:03 crc kubenswrapper[4661]: I0120 19:35:03.226531 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-58jbs_69809466-8e46-4f8a-b90e-638f8af8b313/speaker/0.log" Jan 20 19:35:03 crc kubenswrapper[4661]: I0120 19:35:03.235024 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-4g9db_a5920876-3cd0-41cf-b7d8-6fd8ea0af29c/manager/0.log" Jan 20 19:35:03 crc kubenswrapper[4661]: I0120 19:35:03.237331 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-58jbs_69809466-8e46-4f8a-b90e-638f8af8b313/kube-rbac-proxy/0.log" Jan 20 19:35:03 crc kubenswrapper[4661]: I0120 19:35:03.279726 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-svt25_6c1159da-faf7-4389-b57b-05173827968d/manager/0.log" Jan 20 19:35:03 crc kubenswrapper[4661]: I0120 19:35:03.309482 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-69ktn_12b130a9-df33-4c1a-a145-961791dc9d9d/manager/0.log" Jan 20 19:35:03 crc kubenswrapper[4661]: I0120 19:35:03.354447 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-cqf8m_1b070a22-e050-4db7-bc74-f8a1129a8d61/manager/0.log" Jan 20 19:35:03 crc 
kubenswrapper[4661]: I0120 19:35:03.422974 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-2g55t_52bfaf4d-624e-45d7-86d8-4c0e18afe2e6/manager/0.log" Jan 20 19:35:03 crc kubenswrapper[4661]: I0120 19:35:03.436109 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-mqz45_5798b368-6725-4e14-a77c-37b7bcfd538d/manager/0.log" Jan 20 19:35:03 crc kubenswrapper[4661]: I0120 19:35:03.467582 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc_65995719-9618-424e-a324-084d52a0cd47/manager/0.log" Jan 20 19:35:03 crc kubenswrapper[4661]: I0120 19:35:03.617530 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-fdc84db4c-p87rq_8e170a45-9133-4aee-81e7-7f6188f48c91/operator/0.log" Jan 20 19:35:04 crc kubenswrapper[4661]: I0120 19:35:04.721481 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2wqcl_13a9f3bc-c133-49ea-9cfd-bc8c107e32c6/cert-manager-controller/0.log" Jan 20 19:35:04 crc kubenswrapper[4661]: I0120 19:35:04.736000 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-f7qg8_3c6e82bb-badf-4079-abf0-566f4b6f0776/cert-manager-cainjector/0.log" Jan 20 19:35:04 crc kubenswrapper[4661]: I0120 19:35:04.750331 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-scrrz_b1feddfe-5c29-4eba-99c5-65849498f0dc/cert-manager-webhook/0.log" Jan 20 19:35:04 crc kubenswrapper[4661]: I0120 19:35:04.891147 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58b4997fc9-9wjks_d603e76e-8a9d-444f-b251-2d29b5588c8e/manager/0.log" Jan 20 19:35:04 crc kubenswrapper[4661]: I0120 19:35:04.901400 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wj8kr_a9b5891c-9b50-4f14-ade6-69a048487d08/registry-server/0.log" Jan 20 19:35:04 crc kubenswrapper[4661]: I0120 19:35:04.947096 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-4msz7_f61aad5b-f531-4dc0-8328-4b057c84651e/manager/0.log" Jan 20 19:35:04 crc kubenswrapper[4661]: I0120 19:35:04.967307 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-tcgdv_dbbf0040-fc50-457e-ad76-42d6061a6df1/manager/0.log" Jan 20 19:35:04 crc kubenswrapper[4661]: I0120 19:35:04.989926 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-v8gf9_2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec/operator/0.log" Jan 20 19:35:04 crc kubenswrapper[4661]: I0120 19:35:04.998876 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-wqr58_497cc518-3499-43be-8aff-c4ff58803cba/manager/0.log" Jan 20 19:35:05 crc kubenswrapper[4661]: I0120 19:35:05.080810 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-2wsx8_22fe1eac-c7f9-4cef-8811-db5861b4caa2/manager/0.log" Jan 20 19:35:05 crc kubenswrapper[4661]: I0120 19:35:05.088983 4661 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-gg985_7f267072-d784-469d-acad-238e58ddd82c/manager/0.log" Jan 20 19:35:05 crc kubenswrapper[4661]: I0120 19:35:05.099011 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-hppzk_5a07b584-21cc-464b-a3bf-046c6e0ab18f/manager/0.log" Jan 20 19:35:05 crc kubenswrapper[4661]: I0120 19:35:05.555080 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qdlnn_a507ebcc-7e0b-445b-9688-882358d365ce/control-plane-machine-set-operator/0.log" Jan 20 19:35:05 crc kubenswrapper[4661]: I0120 19:35:05.563721 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hbkg_302e8226-565c-44a4-bb0e-dee670200ae3/kube-rbac-proxy/0.log" Jan 20 19:35:05 crc kubenswrapper[4661]: I0120 19:35:05.574979 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7hbkg_302e8226-565c-44a4-bb0e-dee670200ae3/machine-api-operator/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.541570 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-bbwzg_e257e7b3-ba70-44d2-abb9-6a6848bf1c06/manager/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.550992 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/extract/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.559454 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/util/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.567061 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c328dd49ee03958f20cc032e90cbfddae000be998ce16b6019d67f47cbzppkm_dc3336c8-c6d2-4f42-b8d7-534f5e765776/pull/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.628508 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-hk9zx_51bdae14-22a5-4783-8712-fc51ca6d8a07/manager/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.645643 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dw6hd_08e08814-f213-4476-a78d-82cddc30022d/manager/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.646767 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-frgmz_68fe7ab0-cff5-474c-aa0d-7c579ddc51bb/nmstate-console-plugin/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.673927 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q9p2x_0b121ec2-f30a-46c4-a556-dd00cca2a1e3/nmstate-handler/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.687234 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7pxb7_7f2c01ac-294a-42b8-9988-22419d94a0ec/nmstate-metrics/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.694527 4661 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7pxb7_7f2c01ac-294a-42b8-9988-22419d94a0ec/kube-rbac-proxy/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.723199 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-mkng6_91e3ce75-26ba-42cb-b4dd-322bc9188bab/nmstate-operator/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.737854 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gzjg9_eccd3436-cb57-49b8-a2f7-106fe5e39c7d/manager/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.739445 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-6rl82_fa442cfc-fd6e-4b5d-882d-aaa8de83f99a/nmstate-webhook/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.747469 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-r5bws_2bf3fc47-9ca2-45aa-9835-1ed5d413b0ec/manager/0.log" Jan 20 19:35:06 crc kubenswrapper[4661]: I0120 19:35:06.765710 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5w4m2_04a8f9c5-45fc-47db-adf2-3de38af2cf96/manager/0.log" Jan 20 19:35:07 crc kubenswrapper[4661]: I0120 19:35:07.082031 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-w8bbb_70002b35-6f0d-4679-9279-a80574c467f0/manager/0.log" Jan 20 19:35:07 crc kubenswrapper[4661]: I0120 19:35:07.099301 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-cszrc_10ed69a9-7fbf-4139-b2b2-80dec4f8cf41/manager/0.log" Jan 20 19:35:07 crc kubenswrapper[4661]: I0120 19:35:07.190486 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-4g9db_a5920876-3cd0-41cf-b7d8-6fd8ea0af29c/manager/0.log" Jan 20 19:35:07 crc kubenswrapper[4661]: I0120 19:35:07.240953 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-svt25_6c1159da-faf7-4389-b57b-05173827968d/manager/0.log" Jan 20 19:35:07 crc kubenswrapper[4661]: I0120 19:35:07.275269 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-69ktn_12b130a9-df33-4c1a-a145-961791dc9d9d/manager/0.log" Jan 20 19:35:07 crc kubenswrapper[4661]: I0120 19:35:07.322503 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-cqf8m_1b070a22-e050-4db7-bc74-f8a1129a8d61/manager/0.log" Jan 20 19:35:07 crc kubenswrapper[4661]: I0120 19:35:07.387758 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-2g55t_52bfaf4d-624e-45d7-86d8-4c0e18afe2e6/manager/0.log" Jan 20 19:35:07 crc kubenswrapper[4661]: I0120 19:35:07.397537 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-mqz45_5798b368-6725-4e14-a77c-37b7bcfd538d/manager/0.log" Jan 20 19:35:07 crc kubenswrapper[4661]: I0120 19:35:07.409692 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854xbhqc_65995719-9618-424e-a324-084d52a0cd47/manager/0.log" Jan 20 19:35:07 crc kubenswrapper[4661]: I0120 19:35:07.524044 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-fdc84db4c-p87rq_8e170a45-9133-4aee-81e7-7f6188f48c91/operator/0.log" Jan 20 19:35:08 crc kubenswrapper[4661]: I0120 19:35:08.747506 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58b4997fc9-9wjks_d603e76e-8a9d-444f-b251-2d29b5588c8e/manager/0.log" Jan 20 19:35:08 crc kubenswrapper[4661]: I0120 19:35:08.764415 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wj8kr_a9b5891c-9b50-4f14-ade6-69a048487d08/registry-server/0.log" Jan 20 19:35:08 crc kubenswrapper[4661]: I0120 19:35:08.820982 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-4msz7_f61aad5b-f531-4dc0-8328-4b057c84651e/manager/0.log" Jan 20 19:35:08 crc kubenswrapper[4661]: I0120 19:35:08.851128 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-tcgdv_dbbf0040-fc50-457e-ad76-42d6061a6df1/manager/0.log" Jan 20 19:35:08 crc kubenswrapper[4661]: I0120 19:35:08.875989 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-v8gf9_2e78fff0-2eba-4aa9-a4b0-2f5ff775e1ec/operator/0.log" Jan 20 19:35:08 crc kubenswrapper[4661]: I0120 19:35:08.889962 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-wqr58_497cc518-3499-43be-8aff-c4ff58803cba/manager/0.log" Jan 20 19:35:08 crc kubenswrapper[4661]: I0120 19:35:08.980910 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-2wsx8_22fe1eac-c7f9-4cef-8811-db5861b4caa2/manager/0.log" Jan 20 19:35:08 crc kubenswrapper[4661]: I0120 19:35:08.993845 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-gg985_7f267072-d784-469d-acad-238e58ddd82c/manager/0.log" Jan 20 19:35:09 crc kubenswrapper[4661]: I0120 19:35:09.006026 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-hppzk_5a07b584-21cc-464b-a3bf-046c6e0ab18f/manager/0.log" Jan 20 19:35:11 crc kubenswrapper[4661]: I0120 19:35:11.254465 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/kube-multus-additional-cni-plugins/0.log" Jan 20 19:35:11 crc kubenswrapper[4661]: I0120 19:35:11.264046 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/egress-router-binary-copy/0.log" Jan 20 19:35:11 crc kubenswrapper[4661]: I0120 19:35:11.271414 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/cni-plugins/0.log" Jan 20 19:35:11 crc kubenswrapper[4661]: I0120 19:35:11.278159 4661 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/bond-cni-plugin/0.log" Jan 20 19:35:11 crc kubenswrapper[4661]: I0120 19:35:11.283895 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/routeoverride-cni/0.log" Jan 20 19:35:11 crc kubenswrapper[4661]: I0120 19:35:11.293147 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/whereabouts-cni-bincopy/0.log" Jan 20 19:35:11 crc kubenswrapper[4661]: I0120 19:35:11.301283 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-j9j6p_e190abed-d178-4ce7-9485-f6090ecf8578/whereabouts-cni/0.log" Jan 20 19:35:11 crc kubenswrapper[4661]: I0120 19:35:11.330340 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-44vhk_e4cd0e68-3282-4713-8386-8c86f56f1f70/multus-admission-controller/0.log" Jan 20 19:35:11 crc kubenswrapper[4661]: I0120 19:35:11.336257 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-44vhk_e4cd0e68-3282-4713-8386-8c86f56f1f70/kube-rbac-proxy/0.log" Jan 20 19:35:11 crc kubenswrapper[4661]: I0120 19:35:11.385386 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/2.log" Jan 20 19:35:11 crc kubenswrapper[4661]: I0120 19:35:11.464738 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-z97p2_5b6f2401-3eb9-4ee4-b79c-6faee06bc21c/kube-multus/3.log" Jan 20 19:35:11 crc kubenswrapper[4661]: I0120 19:35:11.501813 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-dhd6h_58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131/network-metrics-daemon/0.log" Jan 20 19:35:11 crc kubenswrapper[4661]: I0120 19:35:11.506270 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-dhd6h_58dc28d6-51a6-49ce-ab8e-ac0b6bbe7131/kube-rbac-proxy/0.log" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.162768 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q9zw6"] Jan 20 19:36:23 crc kubenswrapper[4661]: E0120 19:36:23.163606 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f00d7b-4ab2-4411-8880-07d0a28a884f" containerName="extract-content" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.163619 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f00d7b-4ab2-4411-8880-07d0a28a884f" containerName="extract-content" Jan 20 19:36:23 crc kubenswrapper[4661]: E0120 19:36:23.163633 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f00d7b-4ab2-4411-8880-07d0a28a884f" containerName="extract-utilities" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.163639 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f00d7b-4ab2-4411-8880-07d0a28a884f" containerName="extract-utilities" Jan 20 19:36:23 crc kubenswrapper[4661]: E0120 19:36:23.163660 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f00d7b-4ab2-4411-8880-07d0a28a884f" containerName="registry-server" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.163665 4661 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="93f00d7b-4ab2-4411-8880-07d0a28a884f" containerName="registry-server" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.163845 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f00d7b-4ab2-4411-8880-07d0a28a884f" containerName="registry-server" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.165126 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.204052 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q9zw6"] Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.230938 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-catalog-content\") pod \"redhat-operators-q9zw6\" (UID: \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\") " pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.231003 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zfn5\" (UniqueName: \"kubernetes.io/projected/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-kube-api-access-4zfn5\") pod \"redhat-operators-q9zw6\" (UID: \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\") " pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.231043 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-utilities\") pod \"redhat-operators-q9zw6\" (UID: \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\") " pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.332382 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-catalog-content\") pod \"redhat-operators-q9zw6\" (UID: \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\") " pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.332437 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zfn5\" (UniqueName: \"kubernetes.io/projected/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-kube-api-access-4zfn5\") pod \"redhat-operators-q9zw6\" (UID: \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\") " pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.332467 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-utilities\") pod \"redhat-operators-q9zw6\" (UID: \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\") " pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.333049 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-catalog-content\") pod \"redhat-operators-q9zw6\" (UID: \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\") " pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.333086 4661 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-utilities\") pod \"redhat-operators-q9zw6\" (UID: \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\") " pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.359909 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zfn5\" (UniqueName: \"kubernetes.io/projected/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-kube-api-access-4zfn5\") pod \"redhat-operators-q9zw6\" (UID: \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\") " pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:23 crc kubenswrapper[4661]: I0120 19:36:23.486189 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:24 crc kubenswrapper[4661]: I0120 19:36:24.092292 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q9zw6"] Jan 20 19:36:24 crc kubenswrapper[4661]: I0120 19:36:24.722915 4661 generic.go:334] "Generic (PLEG): container finished" podID="e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" containerID="78ef18243896f6ac22770c417a27074629e99b0bc6e43287876f72fab03bf8e0" exitCode=0 Jan 20 19:36:24 crc kubenswrapper[4661]: I0120 19:36:24.723038 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9zw6" event={"ID":"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53","Type":"ContainerDied","Data":"78ef18243896f6ac22770c417a27074629e99b0bc6e43287876f72fab03bf8e0"} Jan 20 19:36:24 crc kubenswrapper[4661]: I0120 19:36:24.723195 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9zw6" event={"ID":"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53","Type":"ContainerStarted","Data":"23650f260a25d3014140343dbcfe58f9b5cba3077be81a4cf028acd8756d435e"} Jan 20 19:36:25 crc kubenswrapper[4661]: I0120 19:36:25.734883 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9zw6" event={"ID":"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53","Type":"ContainerStarted","Data":"3b3938090865330e928cb94865e43ab00edc856c9e25f3db8239a13285c300ef"} Jan 20 19:36:29 crc kubenswrapper[4661]: I0120 19:36:29.788705 4661 generic.go:334] "Generic (PLEG): container finished" podID="e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" containerID="3b3938090865330e928cb94865e43ab00edc856c9e25f3db8239a13285c300ef" exitCode=0 Jan 20 19:36:29 crc kubenswrapper[4661]: I0120 19:36:29.788787 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9zw6" event={"ID":"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53","Type":"ContainerDied","Data":"3b3938090865330e928cb94865e43ab00edc856c9e25f3db8239a13285c300ef"} Jan 20 19:36:30 crc kubenswrapper[4661]: I0120 19:36:30.802120 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9zw6" event={"ID":"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53","Type":"ContainerStarted","Data":"ab34c3966497bfe14c24b6b4981543530dd24734d92f7acecbeb2c61d4c0c589"} Jan 20 19:36:33 crc kubenswrapper[4661]: I0120 19:36:33.486850 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:33 crc kubenswrapper[4661]: I0120 19:36:33.488241 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:34 crc kubenswrapper[4661]: I0120 
19:36:34.534235 4661 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q9zw6" podUID="e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" containerName="registry-server" probeResult="failure" output=< Jan 20 19:36:34 crc kubenswrapper[4661]: timeout: failed to connect service ":50051" within 1s Jan 20 19:36:34 crc kubenswrapper[4661]: > Jan 20 19:36:43 crc kubenswrapper[4661]: I0120 19:36:43.550495 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:43 crc kubenswrapper[4661]: I0120 19:36:43.613452 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q9zw6" podStartSLOduration=15.070693221 podStartE2EDuration="20.613425573s" podCreationTimestamp="2026-01-20 19:36:23 +0000 UTC" firstStartedPulling="2026-01-20 19:36:24.724590466 +0000 UTC m=+5441.055380128" lastFinishedPulling="2026-01-20 19:36:30.267322828 +0000 UTC m=+5446.598112480" observedRunningTime="2026-01-20 19:36:30.838995468 +0000 UTC m=+5447.169785120" watchObservedRunningTime="2026-01-20 19:36:43.613425573 +0000 UTC m=+5459.944215245" Jan 20 19:36:43 crc kubenswrapper[4661]: I0120 19:36:43.632641 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:43 crc kubenswrapper[4661]: I0120 19:36:43.796769 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q9zw6"] Jan 20 19:36:44 crc kubenswrapper[4661]: I0120 19:36:44.978686 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q9zw6" podUID="e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" containerName="registry-server" containerID="cri-o://ab34c3966497bfe14c24b6b4981543530dd24734d92f7acecbeb2c61d4c0c589" gracePeriod=2 Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.524772 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.617822 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zfn5\" (UniqueName: \"kubernetes.io/projected/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-kube-api-access-4zfn5\") pod \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\" (UID: \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\") " Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.617905 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-utilities\") pod \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\" (UID: \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\") " Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.618019 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-catalog-content\") pod \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\" (UID: \"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53\") " Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.618898 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-utilities" (OuterVolumeSpecName: "utilities") pod "e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" (UID: "e8411b82-ddd8-4296-8d1e-ddfbc67bfd53"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.624298 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-kube-api-access-4zfn5" (OuterVolumeSpecName: "kube-api-access-4zfn5") pod "e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" (UID: "e8411b82-ddd8-4296-8d1e-ddfbc67bfd53"). InnerVolumeSpecName "kube-api-access-4zfn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.720528 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.720556 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zfn5\" (UniqueName: \"kubernetes.io/projected/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-kube-api-access-4zfn5\") on node \"crc\" DevicePath \"\"" Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.732470 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" (UID: "e8411b82-ddd8-4296-8d1e-ddfbc67bfd53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.824271 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.994532 4661 generic.go:334] "Generic (PLEG): container finished" podID="e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" containerID="ab34c3966497bfe14c24b6b4981543530dd24734d92f7acecbeb2c61d4c0c589" exitCode=0 Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.994575 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9zw6" event={"ID":"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53","Type":"ContainerDied","Data":"ab34c3966497bfe14c24b6b4981543530dd24734d92f7acecbeb2c61d4c0c589"} Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.994607 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9zw6" event={"ID":"e8411b82-ddd8-4296-8d1e-ddfbc67bfd53","Type":"ContainerDied","Data":"23650f260a25d3014140343dbcfe58f9b5cba3077be81a4cf028acd8756d435e"} Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.994627 4661 scope.go:117] "RemoveContainer" containerID="ab34c3966497bfe14c24b6b4981543530dd24734d92f7acecbeb2c61d4c0c589" Jan 20 19:36:45 crc kubenswrapper[4661]: I0120 19:36:45.994622 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q9zw6" Jan 20 19:36:46 crc kubenswrapper[4661]: I0120 19:36:46.032526 4661 scope.go:117] "RemoveContainer" containerID="3b3938090865330e928cb94865e43ab00edc856c9e25f3db8239a13285c300ef" Jan 20 19:36:46 crc kubenswrapper[4661]: I0120 19:36:46.038831 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q9zw6"] Jan 20 19:36:46 crc kubenswrapper[4661]: I0120 19:36:46.053923 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q9zw6"] Jan 20 19:36:46 crc kubenswrapper[4661]: I0120 19:36:46.072156 4661 scope.go:117] "RemoveContainer" containerID="78ef18243896f6ac22770c417a27074629e99b0bc6e43287876f72fab03bf8e0" Jan 20 19:36:46 crc kubenswrapper[4661]: I0120 19:36:46.107576 4661 scope.go:117] "RemoveContainer" containerID="ab34c3966497bfe14c24b6b4981543530dd24734d92f7acecbeb2c61d4c0c589" Jan 20 19:36:46 crc kubenswrapper[4661]: E0120 19:36:46.108123 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab34c3966497bfe14c24b6b4981543530dd24734d92f7acecbeb2c61d4c0c589\": container with ID starting with ab34c3966497bfe14c24b6b4981543530dd24734d92f7acecbeb2c61d4c0c589 not found: ID does not exist" containerID="ab34c3966497bfe14c24b6b4981543530dd24734d92f7acecbeb2c61d4c0c589" Jan 20 19:36:46 crc kubenswrapper[4661]: I0120 19:36:46.108190 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab34c3966497bfe14c24b6b4981543530dd24734d92f7acecbeb2c61d4c0c589"} err="failed to get container status \"ab34c3966497bfe14c24b6b4981543530dd24734d92f7acecbeb2c61d4c0c589\": rpc error: code = NotFound desc = could not find container \"ab34c3966497bfe14c24b6b4981543530dd24734d92f7acecbeb2c61d4c0c589\": container with ID starting with ab34c3966497bfe14c24b6b4981543530dd24734d92f7acecbeb2c61d4c0c589 not found: ID does not exist" Jan 20 19:36:46 crc kubenswrapper[4661]: I0120 19:36:46.108238 4661 scope.go:117] "RemoveContainer" containerID="3b3938090865330e928cb94865e43ab00edc856c9e25f3db8239a13285c300ef" Jan 20 19:36:46 crc kubenswrapper[4661]: E0120 19:36:46.108722 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b3938090865330e928cb94865e43ab00edc856c9e25f3db8239a13285c300ef\": container with ID starting with 3b3938090865330e928cb94865e43ab00edc856c9e25f3db8239a13285c300ef not found: ID does not exist" containerID="3b3938090865330e928cb94865e43ab00edc856c9e25f3db8239a13285c300ef" Jan 20 19:36:46 crc kubenswrapper[4661]: I0120 19:36:46.108767 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b3938090865330e928cb94865e43ab00edc856c9e25f3db8239a13285c300ef"} err="failed to get container status \"3b3938090865330e928cb94865e43ab00edc856c9e25f3db8239a13285c300ef\": rpc error: code = NotFound desc = could not find container \"3b3938090865330e928cb94865e43ab00edc856c9e25f3db8239a13285c300ef\": container with ID starting with 3b3938090865330e928cb94865e43ab00edc856c9e25f3db8239a13285c300ef not found: ID does not exist" Jan 20 19:36:46 crc kubenswrapper[4661]: I0120 19:36:46.108795 4661 scope.go:117] "RemoveContainer" containerID="78ef18243896f6ac22770c417a27074629e99b0bc6e43287876f72fab03bf8e0" Jan 20 19:36:46 crc kubenswrapper[4661]: E0120 19:36:46.109257 4661 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"78ef18243896f6ac22770c417a27074629e99b0bc6e43287876f72fab03bf8e0\": container with ID starting with 78ef18243896f6ac22770c417a27074629e99b0bc6e43287876f72fab03bf8e0 not found: ID does not exist" containerID="78ef18243896f6ac22770c417a27074629e99b0bc6e43287876f72fab03bf8e0" Jan 20 19:36:46 crc kubenswrapper[4661]: I0120 19:36:46.109301 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ef18243896f6ac22770c417a27074629e99b0bc6e43287876f72fab03bf8e0"} err="failed to get container status \"78ef18243896f6ac22770c417a27074629e99b0bc6e43287876f72fab03bf8e0\": rpc error: code = NotFound desc = could not find container \"78ef18243896f6ac22770c417a27074629e99b0bc6e43287876f72fab03bf8e0\": container with ID starting with 78ef18243896f6ac22770c417a27074629e99b0bc6e43287876f72fab03bf8e0 not found: ID does not exist" Jan 20 19:36:46 crc kubenswrapper[4661]: I0120 19:36:46.154180 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" path="/var/lib/kubelet/pods/e8411b82-ddd8-4296-8d1e-ddfbc67bfd53/volumes" Jan 20 19:36:59 crc kubenswrapper[4661]: I0120 19:36:59.323583 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:36:59 crc kubenswrapper[4661]: I0120 19:36:59.324195 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:37:29 crc kubenswrapper[4661]: I0120 19:37:29.323354 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:37:29 crc kubenswrapper[4661]: I0120 19:37:29.324079 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:37:59 crc kubenswrapper[4661]: I0120 19:37:59.323139 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:37:59 crc kubenswrapper[4661]: I0120 19:37:59.323730 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:37:59 crc kubenswrapper[4661]: I0120 19:37:59.323792 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 19:37:59 crc kubenswrapper[4661]: I0120 19:37:59.324743 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51457ad649bdc1ea8dd82d8ab16bfc72ad5c24435b0fd5e6de792c40c78dc5c4"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 19:37:59 crc kubenswrapper[4661]: I0120 19:37:59.324830 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://51457ad649bdc1ea8dd82d8ab16bfc72ad5c24435b0fd5e6de792c40c78dc5c4" gracePeriod=600 Jan 20 19:37:59 crc kubenswrapper[4661]: E0120 19:37:59.496354 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78855c94_da90_4523_8d65_70f7fd153dee.slice/crio-conmon-51457ad649bdc1ea8dd82d8ab16bfc72ad5c24435b0fd5e6de792c40c78dc5c4.scope\": RecentStats: unable to find data in memory cache]" Jan 20 19:37:59 crc kubenswrapper[4661]: I0120 19:37:59.981551 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="51457ad649bdc1ea8dd82d8ab16bfc72ad5c24435b0fd5e6de792c40c78dc5c4" exitCode=0 Jan 20 19:37:59 crc kubenswrapper[4661]: I0120 19:37:59.981612 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"51457ad649bdc1ea8dd82d8ab16bfc72ad5c24435b0fd5e6de792c40c78dc5c4"} Jan 20 19:37:59 crc kubenswrapper[4661]: I0120 19:37:59.981862 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerStarted","Data":"8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee"} Jan 20 19:37:59 crc kubenswrapper[4661]: I0120 19:37:59.981887 4661 scope.go:117] "RemoveContainer" containerID="7b21693d8a82b0b4ec2dcf50fcc180f7e20aa5c7528eda5ef12172a1ad0a2efe" Jan 20 19:38:20 crc kubenswrapper[4661]: I0120 19:38:20.074088 4661 scope.go:117] "RemoveContainer" containerID="5a14a940839b66caaeb155dbddf78741718d471f59263a7dcf16ea8fc445cbcb" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.111771 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-88nfc"] Jan 20 19:39:27 crc kubenswrapper[4661]: E0120 19:39:27.114475 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" containerName="registry-server" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.114639 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" containerName="registry-server" Jan 20 19:39:27 crc kubenswrapper[4661]: E0120 19:39:27.114825 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" containerName="extract-utilities" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.114956 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" 
containerName="extract-utilities" Jan 20 19:39:27 crc kubenswrapper[4661]: E0120 19:39:27.115102 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" containerName="extract-content" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.115191 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" containerName="extract-content" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.115494 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8411b82-ddd8-4296-8d1e-ddfbc67bfd53" containerName="registry-server" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.117335 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.151525 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-88nfc"] Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.191295 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-utilities\") pod \"community-operators-88nfc\" (UID: \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\") " pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.191357 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x7kb\" (UniqueName: \"kubernetes.io/projected/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-kube-api-access-2x7kb\") pod \"community-operators-88nfc\" (UID: \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\") " pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.191426 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-catalog-content\") pod \"community-operators-88nfc\" (UID: \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\") " pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.293957 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-utilities\") pod \"community-operators-88nfc\" (UID: \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\") " pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.294032 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x7kb\" (UniqueName: \"kubernetes.io/projected/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-kube-api-access-2x7kb\") pod \"community-operators-88nfc\" (UID: \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\") " pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.294131 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-catalog-content\") pod \"community-operators-88nfc\" (UID: \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\") " pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.294851 4661 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-catalog-content\") pod \"community-operators-88nfc\" (UID: \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\") " pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.295112 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-utilities\") pod \"community-operators-88nfc\" (UID: \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\") " pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.329164 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x7kb\" (UniqueName: \"kubernetes.io/projected/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-kube-api-access-2x7kb\") pod \"community-operators-88nfc\" (UID: \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\") " pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:27 crc kubenswrapper[4661]: I0120 19:39:27.446476 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:28 crc kubenswrapper[4661]: I0120 19:39:28.011033 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-88nfc"] Jan 20 19:39:28 crc kubenswrapper[4661]: I0120 19:39:28.995630 4661 generic.go:334] "Generic (PLEG): container finished" podID="1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" containerID="32b495751e6b4ff2f26627c8c70d412d9b492a1977de34f73245c9b7c19e2260" exitCode=0 Jan 20 19:39:28 crc kubenswrapper[4661]: I0120 19:39:28.996005 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88nfc" event={"ID":"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0","Type":"ContainerDied","Data":"32b495751e6b4ff2f26627c8c70d412d9b492a1977de34f73245c9b7c19e2260"} Jan 20 19:39:28 crc kubenswrapper[4661]: I0120 19:39:28.996045 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88nfc" event={"ID":"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0","Type":"ContainerStarted","Data":"a4b19bab84791673c94d931388ce76931dc1d862a26987935a74806119899271"} Jan 20 19:39:28 crc kubenswrapper[4661]: I0120 19:39:28.998161 4661 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 19:39:30 crc kubenswrapper[4661]: I0120 19:39:30.004987 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88nfc" event={"ID":"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0","Type":"ContainerStarted","Data":"a740f6c0493eeec43b7297c4d7df098b72c89e9024bc50296426eedaff2e7293"} Jan 20 19:39:31 crc kubenswrapper[4661]: I0120 19:39:31.024316 4661 generic.go:334] "Generic (PLEG): container finished" podID="1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" containerID="a740f6c0493eeec43b7297c4d7df098b72c89e9024bc50296426eedaff2e7293" exitCode=0 Jan 20 19:39:31 crc kubenswrapper[4661]: I0120 19:39:31.024465 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88nfc" event={"ID":"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0","Type":"ContainerDied","Data":"a740f6c0493eeec43b7297c4d7df098b72c89e9024bc50296426eedaff2e7293"} Jan 20 19:39:32 crc kubenswrapper[4661]: I0120 19:39:32.034577 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-88nfc" event={"ID":"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0","Type":"ContainerStarted","Data":"054e8988a3d54f26bfc7e43975741747878a7fa21729669d51868d89126c348f"} Jan 20 19:39:37 crc kubenswrapper[4661]: I0120 19:39:37.447203 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:37 crc kubenswrapper[4661]: I0120 19:39:37.449262 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:37 crc kubenswrapper[4661]: I0120 19:39:37.491161 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:37 crc kubenswrapper[4661]: I0120 19:39:37.512737 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-88nfc" podStartSLOduration=8.085172043 podStartE2EDuration="10.512718614s" podCreationTimestamp="2026-01-20 19:39:27 +0000 UTC" firstStartedPulling="2026-01-20 19:39:28.997865119 +0000 UTC m=+5625.328654781" lastFinishedPulling="2026-01-20 19:39:31.42541169 +0000 UTC m=+5627.756201352" observedRunningTime="2026-01-20 19:39:32.060651189 +0000 UTC m=+5628.391440851" watchObservedRunningTime="2026-01-20 19:39:37.512718614 +0000 UTC m=+5633.843508286" Jan 20 19:39:38 crc kubenswrapper[4661]: I0120 19:39:38.138989 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:38 crc kubenswrapper[4661]: I0120 19:39:38.856308 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-88nfc"] Jan 20 19:39:40 crc kubenswrapper[4661]: I0120 19:39:40.113334 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-88nfc" podUID="1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" containerName="registry-server" containerID="cri-o://054e8988a3d54f26bfc7e43975741747878a7fa21729669d51868d89126c348f" gracePeriod=2 Jan 20 19:39:40 crc kubenswrapper[4661]: I0120 19:39:40.586926 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:40 crc kubenswrapper[4661]: I0120 19:39:40.739396 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x7kb\" (UniqueName: \"kubernetes.io/projected/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-kube-api-access-2x7kb\") pod \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\" (UID: \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\") " Jan 20 19:39:40 crc kubenswrapper[4661]: I0120 19:39:40.739509 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-utilities\") pod \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\" (UID: \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\") " Jan 20 19:39:40 crc kubenswrapper[4661]: I0120 19:39:40.739561 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-catalog-content\") pod \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\" (UID: \"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0\") " Jan 20 19:39:40 crc kubenswrapper[4661]: I0120 19:39:40.749227 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-utilities" (OuterVolumeSpecName: "utilities") pod "1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" (UID: "1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:39:40 crc kubenswrapper[4661]: I0120 19:39:40.760586 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-kube-api-access-2x7kb" (OuterVolumeSpecName: "kube-api-access-2x7kb") pod "1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" (UID: "1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0"). InnerVolumeSpecName "kube-api-access-2x7kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:39:40 crc kubenswrapper[4661]: I0120 19:39:40.841929 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:39:40 crc kubenswrapper[4661]: I0120 19:39:40.841960 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x7kb\" (UniqueName: \"kubernetes.io/projected/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-kube-api-access-2x7kb\") on node \"crc\" DevicePath \"\"" Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.047220 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" (UID: "1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.128664 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-88nfc" Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.128662 4661 generic.go:334] "Generic (PLEG): container finished" podID="1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" containerID="054e8988a3d54f26bfc7e43975741747878a7fa21729669d51868d89126c348f" exitCode=0 Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.128731 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88nfc" event={"ID":"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0","Type":"ContainerDied","Data":"054e8988a3d54f26bfc7e43975741747878a7fa21729669d51868d89126c348f"} Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.128955 4661 scope.go:117] "RemoveContainer" containerID="054e8988a3d54f26bfc7e43975741747878a7fa21729669d51868d89126c348f" Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.128975 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88nfc" event={"ID":"1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0","Type":"ContainerDied","Data":"a4b19bab84791673c94d931388ce76931dc1d862a26987935a74806119899271"} Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.153438 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.162949 4661 scope.go:117] "RemoveContainer" containerID="a740f6c0493eeec43b7297c4d7df098b72c89e9024bc50296426eedaff2e7293" Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.187974 4661 scope.go:117] "RemoveContainer" containerID="32b495751e6b4ff2f26627c8c70d412d9b492a1977de34f73245c9b7c19e2260" Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.188153 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-88nfc"] Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.198441 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-88nfc"] Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.263951 4661 scope.go:117] "RemoveContainer" containerID="054e8988a3d54f26bfc7e43975741747878a7fa21729669d51868d89126c348f" Jan 20 19:39:41 crc kubenswrapper[4661]: E0120 19:39:41.264942 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"054e8988a3d54f26bfc7e43975741747878a7fa21729669d51868d89126c348f\": container with ID starting with 054e8988a3d54f26bfc7e43975741747878a7fa21729669d51868d89126c348f not found: ID does not exist" containerID="054e8988a3d54f26bfc7e43975741747878a7fa21729669d51868d89126c348f" Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.264979 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054e8988a3d54f26bfc7e43975741747878a7fa21729669d51868d89126c348f"} err="failed to get container status \"054e8988a3d54f26bfc7e43975741747878a7fa21729669d51868d89126c348f\": rpc error: code = NotFound desc = could not find container \"054e8988a3d54f26bfc7e43975741747878a7fa21729669d51868d89126c348f\": container with ID starting with 054e8988a3d54f26bfc7e43975741747878a7fa21729669d51868d89126c348f not found: ID does not exist" Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.265033 4661 scope.go:117] "RemoveContainer" containerID="a740f6c0493eeec43b7297c4d7df098b72c89e9024bc50296426eedaff2e7293" Jan 20 
19:39:41 crc kubenswrapper[4661]: E0120 19:39:41.265444 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a740f6c0493eeec43b7297c4d7df098b72c89e9024bc50296426eedaff2e7293\": container with ID starting with a740f6c0493eeec43b7297c4d7df098b72c89e9024bc50296426eedaff2e7293 not found: ID does not exist" containerID="a740f6c0493eeec43b7297c4d7df098b72c89e9024bc50296426eedaff2e7293" Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.265514 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a740f6c0493eeec43b7297c4d7df098b72c89e9024bc50296426eedaff2e7293"} err="failed to get container status \"a740f6c0493eeec43b7297c4d7df098b72c89e9024bc50296426eedaff2e7293\": rpc error: code = NotFound desc = could not find container \"a740f6c0493eeec43b7297c4d7df098b72c89e9024bc50296426eedaff2e7293\": container with ID starting with a740f6c0493eeec43b7297c4d7df098b72c89e9024bc50296426eedaff2e7293 not found: ID does not exist" Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.265533 4661 scope.go:117] "RemoveContainer" containerID="32b495751e6b4ff2f26627c8c70d412d9b492a1977de34f73245c9b7c19e2260" Jan 20 19:39:41 crc kubenswrapper[4661]: E0120 19:39:41.265930 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32b495751e6b4ff2f26627c8c70d412d9b492a1977de34f73245c9b7c19e2260\": container with ID starting with 32b495751e6b4ff2f26627c8c70d412d9b492a1977de34f73245c9b7c19e2260 not found: ID does not exist" containerID="32b495751e6b4ff2f26627c8c70d412d9b492a1977de34f73245c9b7c19e2260" Jan 20 19:39:41 crc kubenswrapper[4661]: I0120 19:39:41.265963 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32b495751e6b4ff2f26627c8c70d412d9b492a1977de34f73245c9b7c19e2260"} err="failed to get container status \"32b495751e6b4ff2f26627c8c70d412d9b492a1977de34f73245c9b7c19e2260\": rpc error: code = NotFound desc = could not find container \"32b495751e6b4ff2f26627c8c70d412d9b492a1977de34f73245c9b7c19e2260\": container with ID starting with 32b495751e6b4ff2f26627c8c70d412d9b492a1977de34f73245c9b7c19e2260 not found: ID does not exist" Jan 20 19:39:42 crc kubenswrapper[4661]: I0120 19:39:42.156091 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" path="/var/lib/kubelet/pods/1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0/volumes" Jan 20 19:39:59 crc kubenswrapper[4661]: I0120 19:39:59.323290 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:39:59 crc kubenswrapper[4661]: I0120 19:39:59.323867 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.658588 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ctjlc"] Jan 20 19:40:08 crc kubenswrapper[4661]: E0120 19:40:08.660424 4661 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" containerName="extract-content" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.660547 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" containerName="extract-content" Jan 20 19:40:08 crc kubenswrapper[4661]: E0120 19:40:08.660750 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" containerName="registry-server" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.660842 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" containerName="registry-server" Jan 20 19:40:08 crc kubenswrapper[4661]: E0120 19:40:08.660956 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" containerName="extract-utilities" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.661034 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" containerName="extract-utilities" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.661381 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7ad9de-7ce0-44c4-bbf1-b4b667ee79c0" containerName="registry-server" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.663365 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.701705 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ctjlc"] Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.806209 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/772ae114-b7b8-47ea-b656-d69a41823348-catalog-content\") pod \"certified-operators-ctjlc\" (UID: \"772ae114-b7b8-47ea-b656-d69a41823348\") " pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.806275 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjmh2\" (UniqueName: \"kubernetes.io/projected/772ae114-b7b8-47ea-b656-d69a41823348-kube-api-access-mjmh2\") pod \"certified-operators-ctjlc\" (UID: \"772ae114-b7b8-47ea-b656-d69a41823348\") " pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.806361 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/772ae114-b7b8-47ea-b656-d69a41823348-utilities\") pod \"certified-operators-ctjlc\" (UID: \"772ae114-b7b8-47ea-b656-d69a41823348\") " pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.908153 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/772ae114-b7b8-47ea-b656-d69a41823348-catalog-content\") pod \"certified-operators-ctjlc\" (UID: \"772ae114-b7b8-47ea-b656-d69a41823348\") " pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.908216 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjmh2\" (UniqueName: 
\"kubernetes.io/projected/772ae114-b7b8-47ea-b656-d69a41823348-kube-api-access-mjmh2\") pod \"certified-operators-ctjlc\" (UID: \"772ae114-b7b8-47ea-b656-d69a41823348\") " pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.908286 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/772ae114-b7b8-47ea-b656-d69a41823348-utilities\") pod \"certified-operators-ctjlc\" (UID: \"772ae114-b7b8-47ea-b656-d69a41823348\") " pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.908721 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/772ae114-b7b8-47ea-b656-d69a41823348-utilities\") pod \"certified-operators-ctjlc\" (UID: \"772ae114-b7b8-47ea-b656-d69a41823348\") " pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.908814 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/772ae114-b7b8-47ea-b656-d69a41823348-catalog-content\") pod \"certified-operators-ctjlc\" (UID: \"772ae114-b7b8-47ea-b656-d69a41823348\") " pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.940420 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjmh2\" (UniqueName: \"kubernetes.io/projected/772ae114-b7b8-47ea-b656-d69a41823348-kube-api-access-mjmh2\") pod \"certified-operators-ctjlc\" (UID: \"772ae114-b7b8-47ea-b656-d69a41823348\") " pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:08 crc kubenswrapper[4661]: I0120 19:40:08.992592 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:09 crc kubenswrapper[4661]: I0120 19:40:09.459808 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ctjlc"] Jan 20 19:40:10 crc kubenswrapper[4661]: I0120 19:40:10.465585 4661 generic.go:334] "Generic (PLEG): container finished" podID="772ae114-b7b8-47ea-b656-d69a41823348" containerID="f0cdf71f065abd1cd686f7704670c81dfbc324a5056198d61e642af65b4ac9ae" exitCode=0 Jan 20 19:40:10 crc kubenswrapper[4661]: I0120 19:40:10.465645 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctjlc" event={"ID":"772ae114-b7b8-47ea-b656-d69a41823348","Type":"ContainerDied","Data":"f0cdf71f065abd1cd686f7704670c81dfbc324a5056198d61e642af65b4ac9ae"} Jan 20 19:40:10 crc kubenswrapper[4661]: I0120 19:40:10.465756 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctjlc" event={"ID":"772ae114-b7b8-47ea-b656-d69a41823348","Type":"ContainerStarted","Data":"158c0c01e562ddc720e922a0f532a3f034d141d9904088da047b4f0c5bf5dca0"} Jan 20 19:40:11 crc kubenswrapper[4661]: I0120 19:40:11.474427 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctjlc" event={"ID":"772ae114-b7b8-47ea-b656-d69a41823348","Type":"ContainerStarted","Data":"d6039e7c5ddf46ff998a6b660725afa5a7746ac6e5461a968ef30175ac0d8cfd"} Jan 20 19:40:12 crc kubenswrapper[4661]: I0120 19:40:12.508027 4661 generic.go:334] "Generic (PLEG): container finished" podID="772ae114-b7b8-47ea-b656-d69a41823348" containerID="d6039e7c5ddf46ff998a6b660725afa5a7746ac6e5461a968ef30175ac0d8cfd" exitCode=0 Jan 20 19:40:12 crc kubenswrapper[4661]: I0120 19:40:12.508114 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctjlc" event={"ID":"772ae114-b7b8-47ea-b656-d69a41823348","Type":"ContainerDied","Data":"d6039e7c5ddf46ff998a6b660725afa5a7746ac6e5461a968ef30175ac0d8cfd"} Jan 20 19:40:13 crc kubenswrapper[4661]: I0120 19:40:13.518314 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctjlc" event={"ID":"772ae114-b7b8-47ea-b656-d69a41823348","Type":"ContainerStarted","Data":"c994b1ffc120fefee34cbe23d44d1383bea58b9ecc0194753a5e7de3e7756bfe"} Jan 20 19:40:13 crc kubenswrapper[4661]: I0120 19:40:13.540641 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ctjlc" podStartSLOduration=3.045582378 podStartE2EDuration="5.540616509s" podCreationTimestamp="2026-01-20 19:40:08 +0000 UTC" firstStartedPulling="2026-01-20 19:40:10.467972295 +0000 UTC m=+5666.798761957" lastFinishedPulling="2026-01-20 19:40:12.963006426 +0000 UTC m=+5669.293796088" observedRunningTime="2026-01-20 19:40:13.537048127 +0000 UTC m=+5669.867837789" watchObservedRunningTime="2026-01-20 19:40:13.540616509 +0000 UTC m=+5669.871406171" Jan 20 19:40:18 crc kubenswrapper[4661]: I0120 19:40:18.994167 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:18 crc kubenswrapper[4661]: I0120 19:40:18.995855 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:19 crc kubenswrapper[4661]: I0120 19:40:19.060020 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:20 crc kubenswrapper[4661]: I0120 19:40:20.068943 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:20 crc kubenswrapper[4661]: I0120 19:40:20.152393 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ctjlc"] Jan 20 19:40:21 crc kubenswrapper[4661]: I0120 19:40:21.949550 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ctjlc" podUID="772ae114-b7b8-47ea-b656-d69a41823348" containerName="registry-server" containerID="cri-o://c994b1ffc120fefee34cbe23d44d1383bea58b9ecc0194753a5e7de3e7756bfe" gracePeriod=2 Jan 20 19:40:22 crc kubenswrapper[4661]: I0120 19:40:22.967158 4661 generic.go:334] "Generic (PLEG): container finished" podID="772ae114-b7b8-47ea-b656-d69a41823348" containerID="c994b1ffc120fefee34cbe23d44d1383bea58b9ecc0194753a5e7de3e7756bfe" exitCode=0 Jan 20 19:40:22 crc kubenswrapper[4661]: I0120 19:40:22.967533 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctjlc" event={"ID":"772ae114-b7b8-47ea-b656-d69a41823348","Type":"ContainerDied","Data":"c994b1ffc120fefee34cbe23d44d1383bea58b9ecc0194753a5e7de3e7756bfe"} Jan 20 19:40:23 crc kubenswrapper[4661]: I0120 19:40:23.103468 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:23 crc kubenswrapper[4661]: I0120 19:40:23.228357 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/772ae114-b7b8-47ea-b656-d69a41823348-utilities\") pod \"772ae114-b7b8-47ea-b656-d69a41823348\" (UID: \"772ae114-b7b8-47ea-b656-d69a41823348\") " Jan 20 19:40:23 crc kubenswrapper[4661]: I0120 19:40:23.228427 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjmh2\" (UniqueName: \"kubernetes.io/projected/772ae114-b7b8-47ea-b656-d69a41823348-kube-api-access-mjmh2\") pod \"772ae114-b7b8-47ea-b656-d69a41823348\" (UID: \"772ae114-b7b8-47ea-b656-d69a41823348\") " Jan 20 19:40:23 crc kubenswrapper[4661]: I0120 19:40:23.228508 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/772ae114-b7b8-47ea-b656-d69a41823348-catalog-content\") pod \"772ae114-b7b8-47ea-b656-d69a41823348\" (UID: \"772ae114-b7b8-47ea-b656-d69a41823348\") " Jan 20 19:40:23 crc kubenswrapper[4661]: I0120 19:40:23.229160 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/772ae114-b7b8-47ea-b656-d69a41823348-utilities" (OuterVolumeSpecName: "utilities") pod "772ae114-b7b8-47ea-b656-d69a41823348" (UID: "772ae114-b7b8-47ea-b656-d69a41823348"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:40:23 crc kubenswrapper[4661]: I0120 19:40:23.244051 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/772ae114-b7b8-47ea-b656-d69a41823348-kube-api-access-mjmh2" (OuterVolumeSpecName: "kube-api-access-mjmh2") pod "772ae114-b7b8-47ea-b656-d69a41823348" (UID: "772ae114-b7b8-47ea-b656-d69a41823348"). InnerVolumeSpecName "kube-api-access-mjmh2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:40:23 crc kubenswrapper[4661]: I0120 19:40:23.285929 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/772ae114-b7b8-47ea-b656-d69a41823348-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "772ae114-b7b8-47ea-b656-d69a41823348" (UID: "772ae114-b7b8-47ea-b656-d69a41823348"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:40:23 crc kubenswrapper[4661]: I0120 19:40:23.331357 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/772ae114-b7b8-47ea-b656-d69a41823348-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:40:23 crc kubenswrapper[4661]: I0120 19:40:23.331389 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjmh2\" (UniqueName: \"kubernetes.io/projected/772ae114-b7b8-47ea-b656-d69a41823348-kube-api-access-mjmh2\") on node \"crc\" DevicePath \"\"" Jan 20 19:40:23 crc kubenswrapper[4661]: I0120 19:40:23.331399 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/772ae114-b7b8-47ea-b656-d69a41823348-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:40:23 crc kubenswrapper[4661]: I0120 19:40:23.978383 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ctjlc" event={"ID":"772ae114-b7b8-47ea-b656-d69a41823348","Type":"ContainerDied","Data":"158c0c01e562ddc720e922a0f532a3f034d141d9904088da047b4f0c5bf5dca0"} Jan 20 19:40:23 crc kubenswrapper[4661]: I0120 19:40:23.978723 4661 scope.go:117] "RemoveContainer" containerID="c994b1ffc120fefee34cbe23d44d1383bea58b9ecc0194753a5e7de3e7756bfe" Jan 20 19:40:23 crc kubenswrapper[4661]: I0120 19:40:23.978640 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ctjlc" Jan 20 19:40:24 crc kubenswrapper[4661]: I0120 19:40:24.005010 4661 scope.go:117] "RemoveContainer" containerID="d6039e7c5ddf46ff998a6b660725afa5a7746ac6e5461a968ef30175ac0d8cfd" Jan 20 19:40:24 crc kubenswrapper[4661]: I0120 19:40:24.027825 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ctjlc"] Jan 20 19:40:24 crc kubenswrapper[4661]: I0120 19:40:24.032031 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ctjlc"] Jan 20 19:40:24 crc kubenswrapper[4661]: I0120 19:40:24.074189 4661 scope.go:117] "RemoveContainer" containerID="f0cdf71f065abd1cd686f7704670c81dfbc324a5056198d61e642af65b4ac9ae" Jan 20 19:40:24 crc kubenswrapper[4661]: I0120 19:40:24.164685 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="772ae114-b7b8-47ea-b656-d69a41823348" path="/var/lib/kubelet/pods/772ae114-b7b8-47ea-b656-d69a41823348/volumes" Jan 20 19:40:29 crc kubenswrapper[4661]: I0120 19:40:29.323621 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:40:29 crc kubenswrapper[4661]: I0120 19:40:29.324299 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:40:59 crc kubenswrapper[4661]: I0120 19:40:59.323573 4661 patch_prober.go:28] interesting pod/machine-config-daemon-svf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 19:40:59 crc kubenswrapper[4661]: I0120 19:40:59.324218 4661 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 19:40:59 crc kubenswrapper[4661]: I0120 19:40:59.324251 4661 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" Jan 20 19:40:59 crc kubenswrapper[4661]: I0120 19:40:59.325081 4661 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee"} pod="openshift-machine-config-operator/machine-config-daemon-svf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 19:40:59 crc kubenswrapper[4661]: I0120 19:40:59.325134 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" containerName="machine-config-daemon" containerID="cri-o://8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" 
gracePeriod=600 Jan 20 19:40:59 crc kubenswrapper[4661]: E0120 19:40:59.458850 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:41:00 crc kubenswrapper[4661]: I0120 19:41:00.343070 4661 generic.go:334] "Generic (PLEG): container finished" podID="78855c94-da90-4523-8d65-70f7fd153dee" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" exitCode=0 Jan 20 19:41:00 crc kubenswrapper[4661]: I0120 19:41:00.343142 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" event={"ID":"78855c94-da90-4523-8d65-70f7fd153dee","Type":"ContainerDied","Data":"8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee"} Jan 20 19:41:00 crc kubenswrapper[4661]: I0120 19:41:00.343767 4661 scope.go:117] "RemoveContainer" containerID="51457ad649bdc1ea8dd82d8ab16bfc72ad5c24435b0fd5e6de792c40c78dc5c4" Jan 20 19:41:00 crc kubenswrapper[4661]: I0120 19:41:00.344250 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:41:00 crc kubenswrapper[4661]: E0120 19:41:00.344544 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:41:14 crc kubenswrapper[4661]: I0120 19:41:14.148419 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:41:14 crc kubenswrapper[4661]: E0120 19:41:14.149183 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:41:25 crc kubenswrapper[4661]: I0120 19:41:25.143223 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:41:25 crc kubenswrapper[4661]: E0120 19:41:25.144292 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:41:38 crc kubenswrapper[4661]: I0120 19:41:38.142915 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:41:38 crc kubenswrapper[4661]: E0120 19:41:38.143978 4661 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:41:50 crc kubenswrapper[4661]: I0120 19:41:50.142502 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:41:50 crc kubenswrapper[4661]: E0120 19:41:50.143419 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:42:05 crc kubenswrapper[4661]: I0120 19:42:05.142534 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:42:05 crc kubenswrapper[4661]: E0120 19:42:05.143891 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:42:16 crc kubenswrapper[4661]: I0120 19:42:16.143059 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:42:16 crc kubenswrapper[4661]: E0120 19:42:16.144455 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:42:27 crc kubenswrapper[4661]: I0120 19:42:27.143751 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:42:27 crc kubenswrapper[4661]: E0120 19:42:27.144999 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:42:29 crc kubenswrapper[4661]: I0120 19:42:29.306908 4661 generic.go:334] "Generic (PLEG): container finished" podID="1c0be25f-c64b-46cd-96d1-c9306ca20536" containerID="88f3cd081cc0b391c97df2c593b87901b89d12659b809e90a40f989bc24ecb5a" exitCode=0 Jan 20 19:42:29 crc kubenswrapper[4661]: I0120 19:42:29.307053 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2z2xq/must-gather-p8q68" 
event={"ID":"1c0be25f-c64b-46cd-96d1-c9306ca20536","Type":"ContainerDied","Data":"88f3cd081cc0b391c97df2c593b87901b89d12659b809e90a40f989bc24ecb5a"} Jan 20 19:42:29 crc kubenswrapper[4661]: I0120 19:42:29.308141 4661 scope.go:117] "RemoveContainer" containerID="88f3cd081cc0b391c97df2c593b87901b89d12659b809e90a40f989bc24ecb5a" Jan 20 19:42:29 crc kubenswrapper[4661]: I0120 19:42:29.507574 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2z2xq_must-gather-p8q68_1c0be25f-c64b-46cd-96d1-c9306ca20536/gather/0.log" Jan 20 19:42:38 crc kubenswrapper[4661]: I0120 19:42:38.143378 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:42:38 crc kubenswrapper[4661]: E0120 19:42:38.144589 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:42:41 crc kubenswrapper[4661]: I0120 19:42:41.043002 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2z2xq/must-gather-p8q68"] Jan 20 19:42:41 crc kubenswrapper[4661]: I0120 19:42:41.043919 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2z2xq/must-gather-p8q68" podUID="1c0be25f-c64b-46cd-96d1-c9306ca20536" containerName="copy" containerID="cri-o://7107f61bca4c2db6a1e4e6be2c93cf01e69f35ef0990cbd311ad2ea112861067" gracePeriod=2 Jan 20 19:42:41 crc kubenswrapper[4661]: I0120 19:42:41.059284 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2z2xq/must-gather-p8q68"] Jan 20 19:42:41 crc kubenswrapper[4661]: I0120 19:42:41.460579 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2z2xq_must-gather-p8q68_1c0be25f-c64b-46cd-96d1-c9306ca20536/copy/0.log" Jan 20 19:42:41 crc kubenswrapper[4661]: I0120 19:42:41.461323 4661 generic.go:334] "Generic (PLEG): container finished" podID="1c0be25f-c64b-46cd-96d1-c9306ca20536" containerID="7107f61bca4c2db6a1e4e6be2c93cf01e69f35ef0990cbd311ad2ea112861067" exitCode=143 Jan 20 19:42:41 crc kubenswrapper[4661]: I0120 19:42:41.634720 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2z2xq_must-gather-p8q68_1c0be25f-c64b-46cd-96d1-c9306ca20536/copy/0.log" Jan 20 19:42:41 crc kubenswrapper[4661]: I0120 19:42:41.635136 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2z2xq/must-gather-p8q68" Jan 20 19:42:41 crc kubenswrapper[4661]: I0120 19:42:41.807861 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1c0be25f-c64b-46cd-96d1-c9306ca20536-must-gather-output\") pod \"1c0be25f-c64b-46cd-96d1-c9306ca20536\" (UID: \"1c0be25f-c64b-46cd-96d1-c9306ca20536\") " Jan 20 19:42:41 crc kubenswrapper[4661]: I0120 19:42:41.807936 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5n47\" (UniqueName: \"kubernetes.io/projected/1c0be25f-c64b-46cd-96d1-c9306ca20536-kube-api-access-k5n47\") pod \"1c0be25f-c64b-46cd-96d1-c9306ca20536\" (UID: \"1c0be25f-c64b-46cd-96d1-c9306ca20536\") " Jan 20 19:42:41 crc kubenswrapper[4661]: I0120 19:42:41.813804 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c0be25f-c64b-46cd-96d1-c9306ca20536-kube-api-access-k5n47" (OuterVolumeSpecName: "kube-api-access-k5n47") pod "1c0be25f-c64b-46cd-96d1-c9306ca20536" (UID: "1c0be25f-c64b-46cd-96d1-c9306ca20536"). InnerVolumeSpecName "kube-api-access-k5n47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:42:41 crc kubenswrapper[4661]: I0120 19:42:41.910461 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5n47\" (UniqueName: \"kubernetes.io/projected/1c0be25f-c64b-46cd-96d1-c9306ca20536-kube-api-access-k5n47\") on node \"crc\" DevicePath \"\"" Jan 20 19:42:42 crc kubenswrapper[4661]: I0120 19:42:42.015053 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c0be25f-c64b-46cd-96d1-c9306ca20536-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1c0be25f-c64b-46cd-96d1-c9306ca20536" (UID: "1c0be25f-c64b-46cd-96d1-c9306ca20536"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:42:42 crc kubenswrapper[4661]: I0120 19:42:42.114663 4661 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1c0be25f-c64b-46cd-96d1-c9306ca20536-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 20 19:42:42 crc kubenswrapper[4661]: I0120 19:42:42.152832 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c0be25f-c64b-46cd-96d1-c9306ca20536" path="/var/lib/kubelet/pods/1c0be25f-c64b-46cd-96d1-c9306ca20536/volumes" Jan 20 19:42:42 crc kubenswrapper[4661]: I0120 19:42:42.469861 4661 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2z2xq_must-gather-p8q68_1c0be25f-c64b-46cd-96d1-c9306ca20536/copy/0.log" Jan 20 19:42:42 crc kubenswrapper[4661]: I0120 19:42:42.470425 4661 scope.go:117] "RemoveContainer" containerID="7107f61bca4c2db6a1e4e6be2c93cf01e69f35ef0990cbd311ad2ea112861067" Jan 20 19:42:42 crc kubenswrapper[4661]: I0120 19:42:42.470733 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2z2xq/must-gather-p8q68" Jan 20 19:42:42 crc kubenswrapper[4661]: I0120 19:42:42.490803 4661 scope.go:117] "RemoveContainer" containerID="88f3cd081cc0b391c97df2c593b87901b89d12659b809e90a40f989bc24ecb5a" Jan 20 19:42:51 crc kubenswrapper[4661]: I0120 19:42:51.142588 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:42:51 crc kubenswrapper[4661]: E0120 19:42:51.143621 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:43:04 crc kubenswrapper[4661]: I0120 19:43:04.149873 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:43:04 crc kubenswrapper[4661]: E0120 19:43:04.150724 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:43:19 crc kubenswrapper[4661]: I0120 19:43:19.142982 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:43:19 crc kubenswrapper[4661]: E0120 19:43:19.143736 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.701770 4661 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-99sfx"] Jan 20 19:43:30 crc kubenswrapper[4661]: E0120 19:43:30.710454 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772ae114-b7b8-47ea-b656-d69a41823348" containerName="extract-content" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.710482 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="772ae114-b7b8-47ea-b656-d69a41823348" containerName="extract-content" Jan 20 19:43:30 crc kubenswrapper[4661]: E0120 19:43:30.710497 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772ae114-b7b8-47ea-b656-d69a41823348" containerName="registry-server" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.710504 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="772ae114-b7b8-47ea-b656-d69a41823348" containerName="registry-server" Jan 20 19:43:30 crc kubenswrapper[4661]: E0120 19:43:30.710517 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0be25f-c64b-46cd-96d1-c9306ca20536" containerName="copy" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.710523 4661 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1c0be25f-c64b-46cd-96d1-c9306ca20536" containerName="copy" Jan 20 19:43:30 crc kubenswrapper[4661]: E0120 19:43:30.710544 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0be25f-c64b-46cd-96d1-c9306ca20536" containerName="gather" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.710550 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0be25f-c64b-46cd-96d1-c9306ca20536" containerName="gather" Jan 20 19:43:30 crc kubenswrapper[4661]: E0120 19:43:30.710562 4661 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772ae114-b7b8-47ea-b656-d69a41823348" containerName="extract-utilities" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.710568 4661 state_mem.go:107] "Deleted CPUSet assignment" podUID="772ae114-b7b8-47ea-b656-d69a41823348" containerName="extract-utilities" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.710744 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c0be25f-c64b-46cd-96d1-c9306ca20536" containerName="copy" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.710769 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c0be25f-c64b-46cd-96d1-c9306ca20536" containerName="gather" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.710778 4661 memory_manager.go:354] "RemoveStaleState removing state" podUID="772ae114-b7b8-47ea-b656-d69a41823348" containerName="registry-server" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.712009 4661 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.716023 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-99sfx"] Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.782163 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ba20b4-a283-451d-a724-e15cbec79c3e-utilities\") pod \"redhat-marketplace-99sfx\" (UID: \"c9ba20b4-a283-451d-a724-e15cbec79c3e\") " pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.782821 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ba20b4-a283-451d-a724-e15cbec79c3e-catalog-content\") pod \"redhat-marketplace-99sfx\" (UID: \"c9ba20b4-a283-451d-a724-e15cbec79c3e\") " pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.782938 4661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8z5k\" (UniqueName: \"kubernetes.io/projected/c9ba20b4-a283-451d-a724-e15cbec79c3e-kube-api-access-g8z5k\") pod \"redhat-marketplace-99sfx\" (UID: \"c9ba20b4-a283-451d-a724-e15cbec79c3e\") " pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.885364 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ba20b4-a283-451d-a724-e15cbec79c3e-catalog-content\") pod \"redhat-marketplace-99sfx\" (UID: \"c9ba20b4-a283-451d-a724-e15cbec79c3e\") " pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.885448 4661 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g8z5k\" (UniqueName: \"kubernetes.io/projected/c9ba20b4-a283-451d-a724-e15cbec79c3e-kube-api-access-g8z5k\") pod \"redhat-marketplace-99sfx\" (UID: \"c9ba20b4-a283-451d-a724-e15cbec79c3e\") " pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.885559 4661 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ba20b4-a283-451d-a724-e15cbec79c3e-utilities\") pod \"redhat-marketplace-99sfx\" (UID: \"c9ba20b4-a283-451d-a724-e15cbec79c3e\") " pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.885911 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ba20b4-a283-451d-a724-e15cbec79c3e-catalog-content\") pod \"redhat-marketplace-99sfx\" (UID: \"c9ba20b4-a283-451d-a724-e15cbec79c3e\") " pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.886008 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ba20b4-a283-451d-a724-e15cbec79c3e-utilities\") pod \"redhat-marketplace-99sfx\" (UID: \"c9ba20b4-a283-451d-a724-e15cbec79c3e\") " pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:30 crc kubenswrapper[4661]: I0120 19:43:30.905591 4661 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8z5k\" (UniqueName: \"kubernetes.io/projected/c9ba20b4-a283-451d-a724-e15cbec79c3e-kube-api-access-g8z5k\") pod \"redhat-marketplace-99sfx\" (UID: \"c9ba20b4-a283-451d-a724-e15cbec79c3e\") " pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:31 crc kubenswrapper[4661]: I0120 19:43:31.043167 4661 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:31 crc kubenswrapper[4661]: I0120 19:43:31.510214 4661 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-99sfx"] Jan 20 19:43:31 crc kubenswrapper[4661]: I0120 19:43:31.979409 4661 generic.go:334] "Generic (PLEG): container finished" podID="c9ba20b4-a283-451d-a724-e15cbec79c3e" containerID="d2eeb09fb81b1be7af7a444c84453a363e09c764bbc85c5792ae266360b65e77" exitCode=0 Jan 20 19:43:31 crc kubenswrapper[4661]: I0120 19:43:31.979487 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99sfx" event={"ID":"c9ba20b4-a283-451d-a724-e15cbec79c3e","Type":"ContainerDied","Data":"d2eeb09fb81b1be7af7a444c84453a363e09c764bbc85c5792ae266360b65e77"} Jan 20 19:43:31 crc kubenswrapper[4661]: I0120 19:43:31.979922 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99sfx" event={"ID":"c9ba20b4-a283-451d-a724-e15cbec79c3e","Type":"ContainerStarted","Data":"39efeb1b49fd6d63a1d3aef68571d083adeb8aff38f6775d3d31dcd10ad37eab"} Jan 20 19:43:32 crc kubenswrapper[4661]: I0120 19:43:32.991000 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99sfx" event={"ID":"c9ba20b4-a283-451d-a724-e15cbec79c3e","Type":"ContainerStarted","Data":"698fa8a96f51f56e4ea5ec7196458b6fe09f8b8c739104398ae358be84da9a6e"} Jan 20 19:43:34 crc kubenswrapper[4661]: I0120 19:43:34.006430 4661 generic.go:334] "Generic (PLEG): container finished" podID="c9ba20b4-a283-451d-a724-e15cbec79c3e" containerID="698fa8a96f51f56e4ea5ec7196458b6fe09f8b8c739104398ae358be84da9a6e" exitCode=0 Jan 20 19:43:34 crc kubenswrapper[4661]: I0120 19:43:34.006888 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99sfx" event={"ID":"c9ba20b4-a283-451d-a724-e15cbec79c3e","Type":"ContainerDied","Data":"698fa8a96f51f56e4ea5ec7196458b6fe09f8b8c739104398ae358be84da9a6e"} Jan 20 19:43:34 crc kubenswrapper[4661]: I0120 19:43:34.150987 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:43:34 crc kubenswrapper[4661]: E0120 19:43:34.151314 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:43:35 crc kubenswrapper[4661]: I0120 19:43:35.020861 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99sfx" event={"ID":"c9ba20b4-a283-451d-a724-e15cbec79c3e","Type":"ContainerStarted","Data":"ab3d6c75bf5e34522d9b370acfe304f6f78bd55cef009bb07d2878e36b6033fd"} Jan 20 19:43:35 crc kubenswrapper[4661]: I0120 19:43:35.052482 4661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-99sfx" podStartSLOduration=2.5728905170000003 podStartE2EDuration="5.052457434s" podCreationTimestamp="2026-01-20 19:43:30 +0000 UTC" firstStartedPulling="2026-01-20 19:43:31.981376924 +0000 UTC m=+5868.312166586" lastFinishedPulling="2026-01-20 19:43:34.460943841 +0000 UTC m=+5870.791733503" observedRunningTime="2026-01-20 
19:43:35.042618288 +0000 UTC m=+5871.373407970" watchObservedRunningTime="2026-01-20 19:43:35.052457434 +0000 UTC m=+5871.383247106" Jan 20 19:43:41 crc kubenswrapper[4661]: I0120 19:43:41.044064 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:41 crc kubenswrapper[4661]: I0120 19:43:41.044576 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:41 crc kubenswrapper[4661]: I0120 19:43:41.091509 4661 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:41 crc kubenswrapper[4661]: I0120 19:43:41.137509 4661 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:41 crc kubenswrapper[4661]: I0120 19:43:41.330932 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-99sfx"] Jan 20 19:43:43 crc kubenswrapper[4661]: I0120 19:43:43.096286 4661 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-99sfx" podUID="c9ba20b4-a283-451d-a724-e15cbec79c3e" containerName="registry-server" containerID="cri-o://ab3d6c75bf5e34522d9b370acfe304f6f78bd55cef009bb07d2878e36b6033fd" gracePeriod=2 Jan 20 19:43:43 crc kubenswrapper[4661]: I0120 19:43:43.591520 4661 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:43 crc kubenswrapper[4661]: I0120 19:43:43.741993 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ba20b4-a283-451d-a724-e15cbec79c3e-utilities\") pod \"c9ba20b4-a283-451d-a724-e15cbec79c3e\" (UID: \"c9ba20b4-a283-451d-a724-e15cbec79c3e\") " Jan 20 19:43:43 crc kubenswrapper[4661]: I0120 19:43:43.742060 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ba20b4-a283-451d-a724-e15cbec79c3e-catalog-content\") pod \"c9ba20b4-a283-451d-a724-e15cbec79c3e\" (UID: \"c9ba20b4-a283-451d-a724-e15cbec79c3e\") " Jan 20 19:43:43 crc kubenswrapper[4661]: I0120 19:43:43.742110 4661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8z5k\" (UniqueName: \"kubernetes.io/projected/c9ba20b4-a283-451d-a724-e15cbec79c3e-kube-api-access-g8z5k\") pod \"c9ba20b4-a283-451d-a724-e15cbec79c3e\" (UID: \"c9ba20b4-a283-451d-a724-e15cbec79c3e\") " Jan 20 19:43:43 crc kubenswrapper[4661]: I0120 19:43:43.743013 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9ba20b4-a283-451d-a724-e15cbec79c3e-utilities" (OuterVolumeSpecName: "utilities") pod "c9ba20b4-a283-451d-a724-e15cbec79c3e" (UID: "c9ba20b4-a283-451d-a724-e15cbec79c3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:43:43 crc kubenswrapper[4661]: I0120 19:43:43.753400 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ba20b4-a283-451d-a724-e15cbec79c3e-kube-api-access-g8z5k" (OuterVolumeSpecName: "kube-api-access-g8z5k") pod "c9ba20b4-a283-451d-a724-e15cbec79c3e" (UID: "c9ba20b4-a283-451d-a724-e15cbec79c3e"). InnerVolumeSpecName "kube-api-access-g8z5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 19:43:43 crc kubenswrapper[4661]: I0120 19:43:43.783385 4661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9ba20b4-a283-451d-a724-e15cbec79c3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9ba20b4-a283-451d-a724-e15cbec79c3e" (UID: "c9ba20b4-a283-451d-a724-e15cbec79c3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 19:43:43 crc kubenswrapper[4661]: I0120 19:43:43.844590 4661 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ba20b4-a283-451d-a724-e15cbec79c3e-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 19:43:43 crc kubenswrapper[4661]: I0120 19:43:43.844628 4661 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ba20b4-a283-451d-a724-e15cbec79c3e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 19:43:43 crc kubenswrapper[4661]: I0120 19:43:43.844643 4661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8z5k\" (UniqueName: \"kubernetes.io/projected/c9ba20b4-a283-451d-a724-e15cbec79c3e-kube-api-access-g8z5k\") on node \"crc\" DevicePath \"\"" Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.105485 4661 generic.go:334] "Generic (PLEG): container finished" podID="c9ba20b4-a283-451d-a724-e15cbec79c3e" containerID="ab3d6c75bf5e34522d9b370acfe304f6f78bd55cef009bb07d2878e36b6033fd" exitCode=0 Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.105530 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99sfx" event={"ID":"c9ba20b4-a283-451d-a724-e15cbec79c3e","Type":"ContainerDied","Data":"ab3d6c75bf5e34522d9b370acfe304f6f78bd55cef009bb07d2878e36b6033fd"} Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.105556 4661 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99sfx" event={"ID":"c9ba20b4-a283-451d-a724-e15cbec79c3e","Type":"ContainerDied","Data":"39efeb1b49fd6d63a1d3aef68571d083adeb8aff38f6775d3d31dcd10ad37eab"} Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.105571 4661 scope.go:117] "RemoveContainer" containerID="ab3d6c75bf5e34522d9b370acfe304f6f78bd55cef009bb07d2878e36b6033fd" Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.105732 4661 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-99sfx" Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.132844 4661 scope.go:117] "RemoveContainer" containerID="698fa8a96f51f56e4ea5ec7196458b6fe09f8b8c739104398ae358be84da9a6e" Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.160029 4661 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-99sfx"] Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.166969 4661 scope.go:117] "RemoveContainer" containerID="d2eeb09fb81b1be7af7a444c84453a363e09c764bbc85c5792ae266360b65e77" Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.175754 4661 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-99sfx"] Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.211562 4661 scope.go:117] "RemoveContainer" containerID="ab3d6c75bf5e34522d9b370acfe304f6f78bd55cef009bb07d2878e36b6033fd" Jan 20 19:43:44 crc kubenswrapper[4661]: E0120 19:43:44.212925 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab3d6c75bf5e34522d9b370acfe304f6f78bd55cef009bb07d2878e36b6033fd\": container with ID starting with ab3d6c75bf5e34522d9b370acfe304f6f78bd55cef009bb07d2878e36b6033fd not found: ID does not exist" containerID="ab3d6c75bf5e34522d9b370acfe304f6f78bd55cef009bb07d2878e36b6033fd" Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.212993 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3d6c75bf5e34522d9b370acfe304f6f78bd55cef009bb07d2878e36b6033fd"} err="failed to get container status \"ab3d6c75bf5e34522d9b370acfe304f6f78bd55cef009bb07d2878e36b6033fd\": rpc error: code = NotFound desc = could not find container \"ab3d6c75bf5e34522d9b370acfe304f6f78bd55cef009bb07d2878e36b6033fd\": container with ID starting with ab3d6c75bf5e34522d9b370acfe304f6f78bd55cef009bb07d2878e36b6033fd not found: ID does not exist" Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.213024 4661 scope.go:117] "RemoveContainer" containerID="698fa8a96f51f56e4ea5ec7196458b6fe09f8b8c739104398ae358be84da9a6e" Jan 20 19:43:44 crc kubenswrapper[4661]: E0120 19:43:44.213431 4661 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"698fa8a96f51f56e4ea5ec7196458b6fe09f8b8c739104398ae358be84da9a6e\": container with ID starting with 698fa8a96f51f56e4ea5ec7196458b6fe09f8b8c739104398ae358be84da9a6e not found: ID does not exist" containerID="698fa8a96f51f56e4ea5ec7196458b6fe09f8b8c739104398ae358be84da9a6e" Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.213470 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"698fa8a96f51f56e4ea5ec7196458b6fe09f8b8c739104398ae358be84da9a6e"} err="failed to get container status \"698fa8a96f51f56e4ea5ec7196458b6fe09f8b8c739104398ae358be84da9a6e\": rpc error: code = NotFound desc = could not find container \"698fa8a96f51f56e4ea5ec7196458b6fe09f8b8c739104398ae358be84da9a6e\": container with ID starting with 698fa8a96f51f56e4ea5ec7196458b6fe09f8b8c739104398ae358be84da9a6e not found: ID does not exist" Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.213518 4661 scope.go:117] "RemoveContainer" containerID="d2eeb09fb81b1be7af7a444c84453a363e09c764bbc85c5792ae266360b65e77" Jan 20 19:43:44 crc kubenswrapper[4661]: E0120 19:43:44.214012 4661 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d2eeb09fb81b1be7af7a444c84453a363e09c764bbc85c5792ae266360b65e77\": container with ID starting with d2eeb09fb81b1be7af7a444c84453a363e09c764bbc85c5792ae266360b65e77 not found: ID does not exist" containerID="d2eeb09fb81b1be7af7a444c84453a363e09c764bbc85c5792ae266360b65e77" Jan 20 19:43:44 crc kubenswrapper[4661]: I0120 19:43:44.214064 4661 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2eeb09fb81b1be7af7a444c84453a363e09c764bbc85c5792ae266360b65e77"} err="failed to get container status \"d2eeb09fb81b1be7af7a444c84453a363e09c764bbc85c5792ae266360b65e77\": rpc error: code = NotFound desc = could not find container \"d2eeb09fb81b1be7af7a444c84453a363e09c764bbc85c5792ae266360b65e77\": container with ID starting with d2eeb09fb81b1be7af7a444c84453a363e09c764bbc85c5792ae266360b65e77 not found: ID does not exist" Jan 20 19:43:46 crc kubenswrapper[4661]: I0120 19:43:46.151006 4661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ba20b4-a283-451d-a724-e15cbec79c3e" path="/var/lib/kubelet/pods/c9ba20b4-a283-451d-a724-e15cbec79c3e/volumes" Jan 20 19:43:47 crc kubenswrapper[4661]: I0120 19:43:47.142392 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:43:47 crc kubenswrapper[4661]: E0120 19:43:47.142602 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:43:48 crc kubenswrapper[4661]: E0120 19:43:48.325229 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ba20b4_a283_451d_a724_e15cbec79c3e.slice\": RecentStats: unable to find data in memory cache]" Jan 20 19:43:58 crc kubenswrapper[4661]: E0120 19:43:58.568762 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ba20b4_a283_451d_a724_e15cbec79c3e.slice\": RecentStats: unable to find data in memory cache]" Jan 20 19:44:01 crc kubenswrapper[4661]: I0120 19:44:01.142087 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:44:01 crc kubenswrapper[4661]: E0120 19:44:01.143785 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:44:08 crc kubenswrapper[4661]: E0120 19:44:08.836320 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ba20b4_a283_451d_a724_e15cbec79c3e.slice\": RecentStats: unable to find data in memory cache]" Jan 20 
19:44:16 crc kubenswrapper[4661]: I0120 19:44:16.142325 4661 scope.go:117] "RemoveContainer" containerID="8c2c0540c85400f3bc3171f207699fe16a395291b82a8a7f3bee8f8da1569bee" Jan 20 19:44:16 crc kubenswrapper[4661]: E0120 19:44:16.143973 4661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-svf7c_openshift-machine-config-operator(78855c94-da90-4523-8d65-70f7fd153dee)\"" pod="openshift-machine-config-operator/machine-config-daemon-svf7c" podUID="78855c94-da90-4523-8d65-70f7fd153dee" Jan 20 19:44:19 crc kubenswrapper[4661]: E0120 19:44:19.103039 4661 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ba20b4_a283_451d_a724_e15cbec79c3e.slice\": RecentStats: unable to find data in memory cache]"